Essays – Sheep Guarding Llama :: Scott Alan Miller :: A Life Online

Titanic Project Management & Comparison with Software Projects (20 February 2017)

Few projects have ever achieved the fame and notoriety of the Titanic and her sister ships of the Olympic class, the Olympic and the Britannic, whose design began one hundred and ten years ago this year.  There are, of course, many lessons that the fate of the Olympic ships can teach us about project management, and many aspects of that discipline worth covering.

(When referring to the ships as a whole I will simply call them the Olympics, as the three together were White Star Line's Olympic-class ships.  Titanic's individual and later fame is irrelevant here.  I am also taking the position that the general information about the Olympic ships, their history and their fate is common knowledge to the reader, and I will not recount it here.)

Given how often the project management of the Olympics has been covered, I think it is more prudent to look at a few modern parallels where we can view today's project management through a valuable historic lens.  Project management is a discipline that has endured for millennia; many of its challenges, skills and techniques have changed little, and the pitfalls of the past still very much apply to us today.  The old adage holds: if we do not learn from the past, we are doomed to repeat it.

My goal here, then, is to examine the risk analysis, perception and profile of the project and apply that to modern project management.

First, we must identify the stakeholders in the Olympics project: White Star Line itself (sponsoring company and primary investor) and its director Joseph Bruce Ismay; Harland and Wolff (the contracted shipbuilder) with its principal designers Alexander Carlisle and Thomas Andrews; the ships' crews, including Captain Edward John Smith; the British government, as we will see later; and, most importantly, the passengers.

As with any group of stakeholders, different roles were played.  White Star, on one side, was the sponsor and investor and in a modern software project would be analogous to a sponsoring customer, manager or department.  Harland and Wolff were the designers and builders and map most closely to the "team members" of a modern software team, the developers themselves.  The crews of the ships were responsible for operations after the project was completed and would be comparable to an IT operations team taking over the running of the finished software.  The passengers were much like end users today, hoping to benefit from both the engineering deliverable (ship or software) and the service built on top of that product (ferry service or IT managed services). ("Olympic")

Another axis of analysis is that of "chicken" and "pig" stakeholders, where chickens are invested and carry risk while pigs are fully invested and carry ultimate risk.  In ordinary software work we use these terms to talk about degrees of stakeholding – those who are involved versus those who are committed – but in the case of the Olympic ships they take on new and horrific meaning: the crew and passengers literally put their lives on the line in the operational phase of the ships, whereas the investors and builders were only financially at risk. (Schwaber)

Second, I believe it is useful to distinguish between the different projects that exist within the context of the Olympics.  There was, of course, the physical design and construction of the three ships.  This is a single project with two clear components – one of design and one of construction – and three discrete deliverables, namely the three Olympic vessels.  At the end of the construction phase there is an extremely clear delineation point where the project managers and teams involved in assembling a ship stop work and the crew that operates the ship takes over.

Here we can already draw an important analogue to the modern world of technology where software products are designed and developed by software engineers and, when they are complete, are handed over to the IT operational staff who take over the actual intended use of the final product.  These two teams may be internal under a single organizational umbrella or from two, or more, very separate organizations.  But the separation between the engineering and the operational departments has remained just as clear and distinct in most businesses today as it was for ship building and ferry service over a century ago.

We can go a step further and compare White Star's transatlantic ferry service to modern software-as-a-service vendors such as Microsoft Office 365, Salesforce or G Suite.  In these cases the company in question has an engineering or product development team that creates the core product, and then a second team that takes that in-house product and operates it as a service.  This business model – the company that creates the software is also its ultimate operator, but on behalf of external clients – is increasingly important in the software development space.  In many ways the relevance of the Olympics to modern software and IT is increasing rather than decreasing.

This brings up an important interface understanding that was missed on the Olympics and is often missed today: each side of the hand-off believed that the other side was ultimately responsible for safety.  The engineers touted the safety of their design, but when pushed were willing to compromise, assuming that operational procedures would mitigate the risks and that their own efforts were largely redundant.  Likewise, when pushed to keep things moving and make good time, the operations team was willing to compromise on procedures because it believed the engineering team had gone so far as to make those precautions essentially unnecessary, the ship being so safe that operational caution was simply not warranted.  This miscommunication took the endeavor from having two overlapping systems of extreme safety down to essentially none.  Had either side understood how the other would or did operate, it could have taken that into account.  In the end, both sides assumed, at least to some degree, that safety was the "other team's job".  While the ship was advertised heavily on the basis of safety, the reality was that it continued the general trend of the preceding half century and more, in which each year ships were built and operated less safely than the year before. (Brander 1995)

Today we see the same problem arising between IT and software engineering – less around stability (although that certainly remains an issue) and now around security, which plays a role much like safety did in the Olympics' context.  Security has become one of the most important topics of the last decade on both sides of the technology fence, and the industry faces the challenge that both sides must carry out security practices thoroughly – neither is capable of truly implementing secure systems alone.  Planning for safety or security is simply not a substitute for enforcing it procedurally during operations.

An excellent comparison today is British Airways and how they approach every flight that they oversee as it crosses the Atlantic.  As the primary carrier of air traffic over the North Atlantic, the same path that the Olympics were intended to traverse, British Airways has to maintain a reputation for excellence in safety.  Even in 2017, flying over the North Atlantic is a precarious and complicated journey.

Before any British Airways flight takes off, the pilots and crew must review a three-hundred-page mission manual that tells them everything that is going on, including details on the plane, crew, weather and so forth.  The process is so intense that British Airways refuses even to call it a flight, officially referring to every single trip over the Atlantic as a "mission", specifically to drive home to everyone involved the severity and risk of such an endeavor.  They clearly understand the importance of changing how people think about a trip such as this and are aware of what can happen should people begin to assume that everyone else has done their job well and that they can cut corners on their own.  They want no one to become careless or to begin to feel that the flight, even though completed several times each day, is ever routine. (Winchester)

Had the British Airways approach been used with the Titanic, it is very likely that disaster would not have struck when it did.  The operational side alone could have prevented it.  Likewise, had the ship's engineers been held to the same standards as Boeing or Airbus today, they likely would not have been so easily pressured by management to modify the safety requirements as they worked on the project.

What really affected the Olympics, in many ways, was a form of unchecked scope creep.  The project began as a traditional waterfall approach with "big design up front", and the initial requirements were good, with safety playing a critical role.  Had the original project requirements, and even much of the original design, been retained, the ships would have been far safer than they were.  But new requirements for larger dining rooms and more luxurious appointments took precedence, and the scope and parameters of the project were changed to accommodate them.  As with any project, no change happens in a vacuum; each has ramifications for other factors such as cost, safety or delivery date. (Sadur)

The scope creep on the Titanic specifically was dramatic, but hidden and not necessarily obvious for the most part.  It is easy to point to small changes such as a shift in dining room size, but of much greater importance was the change in the time frame in which the ship had to be delivered.  What really altered the scope was that the initial deadlines and schedules had to be maintained, relatively strictly.  This was especially problematic because, in the midst of Titanic's dry dock work and later moored work, the older sibling, Olympic, was brought in for extensive repairs multiple times, which greatly reduced the time available in the original schedule for Titanic's own work to be completed.  This type of scope modification is very easy to overlook or ignore, especially in hindsight, as the physical deliverables and the original dates did not change in any dramatic way.  For all intents and purposes, however, Titanic was rushed through production much faster than had originally been planned.

In modern software engineering it is well accepted that no one can estimate the time a design task will take better than the engineer(s) who will actually do it.  It is also generally accepted that there is no way to significantly speed up engineering and design efforts through management pressure.  Once a project is running at maximum speed, it is not going to go faster; attempts to force it will usually lead to mistakes and oversights.  We know this to be true in software and can assume that it was true for ship design as well, as the principles are the same.  Had the Titanic been given the appropriate amount of time for this process, it is possible that safety measures would have been more thoroughly considered, or at least properly communicated to the operational team at hand-off.  Teams that are rushed are forced to compromise, and since time is the constraint and cannot be adjusted, corners have to be cut somewhere else – almost always in quality and thoroughness.  This might manifest as a mistake, or as failing to fully review all of the factors involved when changing one portion of a design.

This brings us to holistic design thinking.  At the beginning of the project the Olympics were designed with safety in mind: safety that results from the careful interworking of many separate systems that together are intended to make a highly reliable ship.  We cannot look at the components of a ship of this magnitude individually; in isolation they make no sense.  The design of the hull, the style of the decks, the weight of the cargo, the materials used and the style of the bulkheads are all interrelated and must function together.

When the project was pushed to complete more quickly or to change parameters, this holistic thinking and a clear revisiting of earlier decisions did not happen, or did not happen adequately.  Rather, individual components were altered with no regard for how the change would affect their role within the whole of the ship, or the resulting impact on overall safety.  What may have seemed like minor changes had unforeseen consequences because holistic project management was abandoned. (Kozak-Holland)

This change to the engineering was mirrored, of course, in operations.  Each change, such as not using binoculars or not taking water temperature readings with a bucket, was individually somewhat minor, but taken together they were incredibly impactful.  It is likely, though we cannot be sure, that no cohesive project management or even process improvement system was being used.  Who was verifying that binoculars were used, that the water tests were accurate and so forth?  Any check at all would have revealed that the tools needed for those tasks did not exist at all.  There is no way that even a simple test run of the procedures could have been performed, let alone regular checking and process improvement.  The need for process improvement is especially highlighted by the fact that Captain Smith had prior experience on the RMS Olympic, caused an at-sea collision on her fifth voyage, and then nearly repeated the same mistake with the initial launch of the Titanic.  What should have been an important lesson learned by all captains and pilots of the Olympic ships was instead ignored and repeated almost immediately. ("Olympic")

Of course shipbuilding and software are very different things, but many lessons can be shared.  One of the most important is to see the limitations that shipbuilding faced and to recognize when we are not forced to retain those same limitations in software.  The Olympic and Titanic were built nearly at the same time, with no opportunity for engineering knowledge gleaned from the Olympic's construction, let alone her operation, to be applied to the Titanic's construction.  In modern software we would rarely expect such a constraint; we can test software, at least to some small degree, before moving on to additional software that is based upon it, whether in real code or even conceptually.  Project management today needs to leverage the differences of both the modern era and our different industry to its best advantage.  Some software projects do still require processes like this, but they have become rarer over time and are dramatically less common today than they were just twenty years ago.

It is well worth evaluating the work done by Harland and Wolff on the Olympics, as they clearly strove to incorporate whatever feedback loops were possible within their purview at the time.  They attempted to use the construction of the earlier ships to learn lessons for the later ones, although this was very limited as the ships were mostly under construction concurrently and most lessons would not have had time to be applied.  Far more importantly, they took the extraordinary step of having a "guarantee group" sail with the ships, consisting of apprentice and master shipbuilders from all manner of supporting trades. ("Guarantee Group")

The use of the guarantee group for direct feedback was, and truly remains, unprecedented, and it was an enormous investment in hard cost and time for the shipbuilders to give up so many valuable workers to sail in luxury back and forth across the Atlantic.  The group was able to inspect its work first hand, see it in action, gain an understanding of its use within the context of a working ship, and work together on team building, knowledge transfer and more.  This was far more valuable than the feedback available in the shipyards where the ships overlapped in construction; it was a strong investment in the future of their shipbuilding enterprise, a commitment to industrial education that would likely have benefited them for decades.

Modern deployment styles, tools and education have moved the vast majority of software from being created under a waterfall methodology, not so distinct from that used in turn-of-the-[last]-century shipbuilding, to leveraging some degree of Agile methodology allowing rapid testing, evaluation, change and deployment.  Scope creep has gone from something that has to be mitigated or heavily managed to something that can be expected and assumed within the development process, almost to the point of being leveraged.  One of the fundamental problems with big design up front is that it always requires the customer, or customer-role stakeholder, to make "big decisions up front", which are often far harder for them than the design is for the engineers.  These early decisions are often a primary contributor to scope creep or to later change requests and can often be reduced or avoided by agile processes that expect requirements to change continuously and build that into the process.

The shipbuilders, Harland and Wolff, did build a fifteen-foot model of the Olympic for testing, which was useful to some degree, but it of course failed to mimic the hydrodynamic behavior that the full-size ship would later produce and failed to predict some of the more dangerous side effects of the new vessel's size when close to other ships – effects which led to the first accident of the group and to what was nearly a second.  The builders do appear to have made every effort to test and learn at every stage available to them throughout the design and construction process. (Kozak-Holland)

In modern project management this would be comparable to producing a rapid mock-up or wireframe for developers, or even customers, to get hands-on experience with before investing further effort in what might be a dead-end path for unforeseen reasons.  This is especially important in user interface design, where there is often little ability to predict usability or satisfaction without giving actual users a chance to physically manipulate the system and judge for themselves whether it provides the experience they are looking for. (Esposito)

We must, of course, consider the risk that the Olympics undertook within their historical context of financial trends and forces.  At the time, and going back to the middle of the previous century, the prevailing financial thinking was that it was best to lean towards the risky rather than the safe – in terms of loss of life, cargo or ships – and to cover the difference through insurance.  It was simply more financially advantageous for ships to operate in a risky manner than to be overly cautious about human life.  By the time of the Olympics this trend had been well established for nearly sixty years, and it would not begin to change until the heavy publicity of the Titanic sinking.  The impact on the public market did not exist until the "unsinkable" ship, with so many souls aboard, was lost in such a spectacular way.

This approach to risk and its financial trade-offs is one that project managers must understand today just as they did over one hundred years ago.  It is easy to believe that risk is so important that it is worth any cost to eliminate, but projects cannot think this way; it is possible to expend unlimited resources in the pursuit of risk reduction.  In the real world we must balance risks against the cost of mitigating them.  A great modern example, outside of software development specifically, is the handling of credit card fraud in the United States.  Until just the past few years, it was generally the opinion of the US credit card industry that the cost of stronger security measures on credit cards was too high compared to the risk of not having them; essentially, it was more cost effective to reimburse fraudulent transactions than to prevent them.  This cost-to-risk ratio can sometimes be counterintuitive and even frustrating, but it is one that has to drive project decisions in a logical, calculated fashion.

In a similar vein, it is common in IT to design systems as though downtime were an essentially unlimited cost and to spend vastly more mitigating a downtime risk than the actual outage would likely cost if it occurred.  This is obviously foolish, but cost analyses of this type are so rarely run, or run correctly, that it becomes far too easy to fall prey to this mentality.  In software engineering projects we must approach risks in the same way: accept that a risk exists, determine its likelihood and the magnitude of its impact, and compare that against the cost of the mitigation strategies.  This is critical to making an appropriate project management decision about the risk. (Brander 1995)
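To make that comparison concrete, here is a minimal sketch (in Python, with entirely hypothetical figures) of the arithmetic involved: the expected annual loss from a risk is its yearly probability times its impact, and mitigation only makes financial sense when its cost is lower than that expected loss.

```python
def expected_annual_loss(probability_per_year: float, cost_per_event: float) -> float:
    """Expected yearly loss from a risk: likelihood of the event times its impact."""
    return probability_per_year * cost_per_event

# Entirely hypothetical numbers: a 10% chance per year of an outage costing $40,000,
# versus a high-availability setup costing $25,000 per year to operate.
outage_loss = expected_annual_loss(probability_per_year=0.10, cost_per_event=40_000)
mitigation_cost = 25_000

if mitigation_cost < outage_loss:
    print(f"Mitigate: ${mitigation_cost:,} per year is less than the expected loss of ${outage_loss:,.0f}")
else:
    print(f"Accept the risk: ${mitigation_cost:,} per year exceeds the expected loss of ${outage_loss:,.0f}")
```

With these made-up numbers the expected loss works out to $4,000 per year, so a $25,000-per-year high-availability investment would be exactly the kind of over-spending on downtime described above.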

Also of particular interest to extremely large projects, a category for which the Olympics certainly qualified, is the additional concept of being "too big to fail."  This, of course, is a modern phrase that arose during the financial crisis of the past decade, but the concept, and the reality behind it, is far older and is a valuable consideration for any project on a scale where total failure would register as a national financial disaster.  In the case of the Olympics, the British government ultimately insulated the investors from total disaster, as the collapse of one of the largest passenger lines would have been devastating to the country at the time.

White Star Line was simply "too big to fail" and was kept afloat, so to speak, by the government before being forcibly merged with Cunard some years later.  Whether this – knowing that the government would not want to accept the risk of the company failing – was calculated or considered at the time, we do not know.  We do know, however, that it is taken into consideration today with very large projects.  A current example is Lockheed Martin's F-35 fighter, which is dramatically over budget, past its delivery date and no longer even considered likely to be useful, yet has been buoyed for years by government sponsors who see the project as too important to the national economy, even in a state of failure to deliver, to allow it to fully collapse.  As this phenomenon becomes better known, it is likely that we will see more projects take it into consideration in their risk analysis phases. (Ellis)

Jumping to the operational side of the equation, we could examine any number of things that went wrong in the lead-up to the sinking of the Titanic, but at the core what I believe was most evident was a lack of standard operating procedures throughout.  This is understandable to some degree, as the ship was on her maiden voyage and there was little time for process documentation and improvement.  However, this was the flagship of a long-standing shipping line that had a reputation to uphold and a great deal of experience in these matters.  And by the time Titanic attempted her first voyage, the Olympic had already been in service more than long enough to have developed a satisfactory set of standard operating procedures.

Baseline documentation would have been expected even on a maiden voyage; it is unreasonable to expect a ship of such scale to function at all without coordination and communication among the crew.  There was plenty of time, years in fact, for basic crew operational procedures to be created and prepared before the first ship set sail, and of course this would have to be done for all ships of this nature, but such operating procedures were evidently lacking, missing or untested in the case of the Titanic.

The party responsible for operating procedures would most likely come from the operations side of the project, but some of this documentation would need to be provided by, or coordinated with, the engineering and construction teams as well.  The procedures that broke down on the Titanic included chain-of-command failures under pressure, with the director of the company taking over the bridge and the captain allowing it; wireless operators being instructed to prioritize relaying passenger messages over iceberg warnings; wireless operators being allowed to tell other ships attempting to warn them to stop broadcasting; critical messages not being brought to the bridge; tools needed for critical jobs not being supplied; and so forth. (Kuntz)

Much as with the engineering and design of the ships, the operation of the ships needed strong, holistic guidance ensuring that the ship and her crew worked as a whole, rather than treating departments, such as the Marconi wireless operators, as individual units.  In that example, the operators were not officially crew of the ship but employees of Marconi, on board to handle paid passenger communiques and to handle ship emergency traffic only as time allowed.  Had they been overseen as part of a holistic operational management system, even as outside contractors, it is likely that their procedures would have been far more safety focused or, at the very least, that service level agreements around getting messages to the bridge would have been clearly defined rather than ad hoc and discretionary.

In any project and any project component, good documentation – whether of project goals, deliverables, procedures or anything else – is critical, and project management has little hope of success if good communication and documentation are not at the heart of everything we do, both internally within the project and externally with stakeholders.

What we find is that the project management lessons of the Olympic, Titanic and Britannic remain valuable today: pushing for iterative design where possible, investing in tribal knowledge, calculating risk, understanding the roles of systems engineering and systems operations, and recognizing the influence of protective external forces on project costs are all still relevant.  The factors that affect projects come and go in cycles; today we see trends leaning towards models more like the Olympics than unlike them, and in the future the pendulum will likely swing back again.  The underlying lessons remain relevant and will continue to be so.  We can learn much both by evaluating how our own projects are similar to those of White Star and how they differ.

Bibliography and Sources Cited:

Schwaber, Ken. Agile Project Management with Scrum. Redmond: Microsoft Press, 2003.

Kuntz, Tom. The Titanic Disaster Hearings: The Official Transcripts of the 1912 Senate Investigation. New York: Pocket Books, 1998. Audio edition via Audible.

Kozak-Holland, Mark. Lessons from History: Titanic Lessons for IT Projects. Toronto: Multi-Media Publications, 2005.

Brown, David G. “Titanic.” Professional Mariner: The Journal of the Maritime Industry, February 2007.

Esposito, Dino. “Cutting Edge – Don’t Gamble with UX—Use Wireframes.” MSDN Magazine, January 2016.

Sadur, James E. Home page. “Jim’s Titanic Website: Titanic History Timeline.” (2005): 13 February 2017.

Winchester, Simon. Atlantic. Harper Perennial, 2011.

Titanic-Titanic. “Olympic.” (Date Unknown): 15 February 2017.

Titanic-Titanic. “Guarantee Group.” (Date Unknown): 15 February 2017.

Brander, Roy. P. Eng. “The RMS Titanic and its Times: When Accountants Ruled the Waves – 69th Shock & Vibration Symposium, Elias Kline Memorial Lecture”. (1998): 16 February 2017.

Brander, Roy. P. Eng. “The Titanic Disaster: An Enduring Example of Money Management vs. Risk Management.” (1995): 16 February 2017.

Ellis, Sam. "This jet fighter is a disaster, but Congress keeps buying it." Vox, 30 January 2017.

Additional Notes:

Mark Kozak-Holland originally published his book in 2003 as a series of Gantthead articles on the Titanic:

Kozak-Holland, Mark. "IT Project Lessons from Titanic." Gantthead.com, the online community for IT project managers (later ProjectManagement.com) (2003): 8 February 2017.

More Reading:

Kozak-Holland, Mark. Avoiding Project Disaster: Titanic Lessons for IT Executives. Toronto: Multi-Media Publications, 2006.

Kozak-Holland, Mark. On-line, On-time, On-budget: Titanic Lessons for the e-Business Executive. IBM Press, 2002.

US Senate and British Official Hearing and Inquiry Transcripts from 1912 at the Titanic Inquiry Project.

The Cute Waitress Problem (24 November 2010)

This is a life and career issue faced by people in many different situations, but it is known as the "cute waitress problem" because that is where almost everyone has seen it arise and where it can most easily be demonstrated.

The problem goes thus:  A cute girl works at a diner during high school.  She earns far more money than she could doing “normal” high school jobs which generally pay minimum wage and have no benefits.  When she gets out of high school she can now work full time and pick up the best shifts.  She makes great money not only because she is cute and gets great tips but also because most of her income is in tips and she needs only claim a fraction of them on her taxes giving her a large income advantage over someone earning similar money but paying full taxes on them.

She has a choice: go to college, or pursue some other career path that will take a great deal of her time and effort and make it difficult, if not impossible, to maintain her waitressing career.  She has to choose whether to take a hit while she is young, giving up great income in return for the hope of better income later in life.  Waitressing will not continue to earn her more money; in fact, her greatest income potential comes in her late teens and twenties.  If she gives that up and goes to school during her biggest earning years, she has no way to recoup those lost wages later in life if her chosen career does not pan out as hoped.

This is a major decision to have to make, especially for someone likely only seventeen or eighteen years old.  A cute, competent, friendly waitress can easily earn, the day she leaves high school, more money than the average college graduate – but that income will not keep growing.  Giving up an income that allows for moving out, buying a car and being self-sufficient so young is very, very difficult to do.  Other people of a similar age have often worked miserable jobs like dishwashing and look forward to college, an internship or some other path toward a more fulfilling career.  They have no real decision to make – there is nothing to "give up" that cannot be regained at any moment if college does not meet their needs.

The end result is that it is harder for the cute waitress to obtain the lifetime of career advancements, raises and benefits that come from careers where experience builds heavily upon itself, simply because such an attractive option exists offering far better front-loaded benefits than the somewhat risky career-and-education path that offers back-loaded benefits.  The availability of such a great job at such a young age can turn from a blessing into a curse.

Is AppleTV the Next Video Game Console? (4 October 2010)

Something happened recently in the world of video games.  Something sneaky.  Maybe something that was not even planned.

Over the last few years a new product, the iPhone, and its calling-plan-barren cousin, the iPod Touch, have come onto the market with little or no thought given to being a video game platform.  Yet, without any apparent effort, they appear to have supplanted the Sony PSP as the second-string handheld video game platform and, from where I sit, seem poised to overtake Nintendo's DS platform in short order.  What is amazing is that no one seems to really discuss the iPhone as a video game platform.  The whole idea of playing video games on the iPhone seems to have snuck up on everyone.

Now, with little warning, the handheld video game landscape has changed dramatically.  The iPhone – because of its sales volume, screen quality, multi-functionality and rapid update schedule (compared to traditional video game consoles) – represents a serious threat to the way video games have traditionally been handled in the handheld market.

Perhaps the paradigm shift has occurred simply because, unlike traditional hand held consoles, the iPhone earns its revenues via other channels and not through video game licensing.  So instead of working hard to make games expensive and distributing them through traditional sales channels, video games are cheap and downloaded through the same mechanisms that provide music, movies and other applications.  Internet distribution is a fraction of the cost of shipping cartridges around via UPS and warehousing them, securing them and paying an employee to check you out at the counter.  The infrastructure around gaming has been vastly improved.  And now, someone wanting a new game gets it instantly – not only during hours when the store is open and when you have time to get there.

Video game console makers can't really compete with Apple from a hardware perspective.  Apple owns its stack, top to bottom, and spreads its resources across many products, reducing the cost to produce any single one.  It makes its own processors, its own operating system and the other major components, giving it a pricing advantage.  Apple is also able to charge more for its products because they are judged not on their merit as a video game platform but as a mobile computing platform.  By being multi-purpose, the iPhone is able to deliver a better video game experience.

There is a hidden feature of the iPhone and its kin as well: public impression.  Let's face it – if you are riding the train into the office in midtown, playing your DS or PSP can be a little embarrassing.  Not that there is anything wrong with it, but if you are a corporate executive trying to look the part, it may not fit the image you are going for.  It also means carrying an extra device with you all day.  Using an iPhone as a multipurpose device, on the other hand, means that people on the train can't tell whether you are playing Fruit Ninja or sending an email firing the COO for spending the day playing Fruit Ninja on his iPhone instead of working.  This video game ambiguity is a big win for the platform, which is altogether more lifestyle-oriented.

For years it was predicted that the general-purpose PC platform, always more powerful than the video game consoles of the same era, would overtake the console with the "next generation", whichever generation that would be, and that people would hook PCs to their televisions and stop using consoles.  That has not yet happened.  But surprisingly, the logic always used to argue that this shift was inevitable applies more thoroughly to the iPhone than it ever did to the PC: the iPhone is closer to general-purpose computing while still being a vertically integrated, tightly coupled device like a video game console.  Perhaps this blending of models was just what video gaming needed.

Given the surprising rise of the iPhone as the handheld video game platform of choice, should we then consider the AppleTV, the iPhone's television-attached cousin, a prime candidate for the future of the traditional video game console?  The latest iteration of the AppleTV, version two, is based not on the Mac Mini like the original but on the iPod Touch sans screen, and retails for just $99.  That means that, in theory, we are just a controller away from the AppleTV being able to play all of the iPhone's games right on your television!  This does not take into account the large differences between touch-screen control and whatever input the AppleTV would use, but that seems relatively trivial in the grand scheme of things.

Today this may seem silly.  Clearly the AppleTV with its A4 processor is nowhere near powerful enough to rival the major game consoles.  But as the Nintendo Wii has demonstrated, raw power is not always a significant factor in the video game console market; market penetration, cost and multi-use functionality can outweigh processing power.  Realistically, we are not talking about the current generation of AppleTV either.  The AppleTV can iterate to version three before the current crop of video game consoles sees a replacement cycle of its own, putting the AppleTV much, much closer in terms of capability.  The shared platform with the iPhone and the low cost of acquisition and distribution make it a natural platform for casual gamers.

Perhaps the idea of the AppleTV as the next video game console seems silly.  In reality, I tend to agree; it is, however, a very interesting supposition.  Perhaps we should take things one stage further.  At this time, Google's Android operating system is reported to have overtaken Apple's iPhone both in market share and in consumer demand.  Android's broader market appeal and greater choice of hardware might make it a better candidate for gamers and multi-function use.  Rapid Android adoption could prove to be a "game changer" for the gaming market in a way that no one is truly expecting.

Maybe the biggest factors in AppleTV or "set-top Android" adoption over traditional video game consoles will be appearance, power consumption and ancillary use.  Already my PS3 spends 95% of its time or more streaming DLNA or Netflix content, but to run Netflix, its primary job, it requires a disc to be inserted – a bit of a pain for daily use.  The AppleTV does this natively, and does so while being small, unobtrusive and very attractive, unlike the ridiculously large and silly-looking PS3 and Xbox 360.  Ask the average homeowner which device they would like their guests to see sitting by the television, and I guarantee that the AppleTV's aesthetic is a bigger factor than people tend to imagine.

The AppleTV might not be the future of console video games, but I expect that the iPhone / AppleTV platform and the Android will be playing a significant role in how the future of video gaming shapes up.

Do IT: Testing the Waters (29 September 2010)

A question I often get from people looking at going into IT is how to decide, from the vast array of career options, which paths make the most sense.  IT is a massive field; the range of careers within it is wide, and the differences between types of firms are wide as well.  Few fields offer the dynamic range that IT does, and this is both a blessing and a curse.  Few people entering the field really have a good idea of what they will want to do, and they can get lost simply because too many choices are available to them.

If you are coming to IT from nothing more than an interest in computers and technology you may easily find that the doors are just too wide open and there is no clear path upon which to set out on your IT career adventure.  Surely there are career paths beckoning you but they may be hard to identify.

There are, in my estimation, two basic ways to decide where to begin your IT career.  The first is through academics, whether formal or self-study.  This is the traditional approach and has its merits: it allows one to sample many career tasks rather quickly and offers the benefit of building academic credentials while doing so.  The disadvantages are significant, though, most notably that the sampling you receive is often very different from what a real-world job entails, and the insight gained may be dramatically skewed from reality.

The second approach is to do projects on your own that attempt to mimic the field and, when possible, move into entry-level positions or internships where you will have an opportunity to sample many career paths.  Some companies have special programs designed for exactly this purpose – to give young professionals, often recent college graduates, a chance to rotate through many entry-level technical positions, sampling each for several months, giving them time to settle in, learn the basic functions and gain an appreciation for the challenges and rewards of different roles.

Of course, blending these two approaches is an option as well.  Taking a few college classes, working on an entry-level industry certification and playing with several different technologies at home are all great ways to earn an early position or an internship, so mix and match to suit your time, ambition and personality.  The goal is to discover which career path holds the most interest for you, and getting exposure is key.

Some IT careers like desktop tech, helpdesk and developer are highly exposed and moderately well known to people outside of the field.  Other career roles such as systems analyst, database administrator, network engineer, architect or application support may be a bit too abstract and uncommon for someone just entering the field to really understand.  The only true way to get a feel for many of these positions is to work in IT for some time and get exposed to their roles and experiment with their job tasks.

There is no one method that works for everyone.  People looking to enter IT often have a good feel for whether they want the standard IT support track or the software engineering and development track, but it is not at all uncommon for people to lean towards development only to discover that "programming" is a word many non-IT professionals use loosely, and that the real world of development is not at all what they imagined IT to be.

While any advice should be left to an individual basis, someone with the time and resources to explore multiple options should carefully consider the benefits of taking a broad-stroke approach.  Local community colleges often offer some pretty good introductory courses at reasonable prices that provide part time students with access to professors, other students and some structured learning.  Even just two or three classes covering basic concepts like programming, web design, hardware support, databases or similar classes could go a long way to providing exposure to a variety of job options.

This knowledge can then be applied to self-study.  If databases are of interest, for example, then installing, configuring and tuning some database products might be a place to start.  With a few good books and some free trial or open source software, you are on your way.  Once a good idea of whether this is the career for you has begun to take shape, you can leverage that focused experience into looking for an internship or entry-level position that will, hopefully, provide exposure to the area of IT in which you are most interested.
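As one possible, hypothetical starting point for that kind of database self-study, Python ships with SQLite in its standard library, so a first hands-on experiment needs nothing more than the interpreter (the table and sample rows below are made up purely for illustration):

```python
import sqlite3

# Create a throwaway database file, define a table, insert sample rows and query them back.
conn = sqlite3.connect("practice.db")
conn.execute("CREATE TABLE IF NOT EXISTS books (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO books (title, year) VALUES (?, ?)",
    [("A Database Primer", 2005), ("Practical SQL Exercises", 2008)],  # made-up titles
)
conn.commit()

# Read the rows back in year order.
for title, year in conn.execute("SELECT title, year FROM books ORDER BY year"):
    print(title, year)

conn.close()
```

From there, repeating the same exercise on a free server product such as PostgreSQL or MySQL is a natural next step toward the administration and tuning side of the role.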

Once you start in IT it is generally pretty easy to move from one job role to another.  IT experience is often measured primarily in total experience time in the field and not in terms of experience within a specific job role.  Specific experience will be used to move vertically up within a specific job role but horizontal moves are common and expected.   Once inside the industry, getting a good look at the jobs performed by other roles and the way in which they work is easy and a better understanding of options can be acquired.

The most important piece of advice for anyone considering going into the exciting field of IT is, of course, get started right away.  IT is more accessible than just about any field so peruse the local college’s course offerings, swing by the local book store, read “Do IT” on SGL, pick up an extra PC, download some software… get started and see what you enjoy!

Bones Season 3 Episode 7 Massive Amiga Blunder (18 February 2009)

So today Dominica was watching Bones, season 3 episode 7.  She came down to tell me that they had put an Amiga from 1987 into the show and that I had to take a look.  Of course, no one in Hollywood bothers to check anything at all or to even state the obvious correctly.

They claim that the Amiga is from 1987, the same year that my family bought a Commodore Amiga.  The machine they show is obviously a Commodore Amiga 1200 (A1200), which was made from late 1992 through 1996, almost a full decade more modern than they claim.  (To put this in perspective, they say they are showing a computer used for little more than video games, made when I was in mid-elementary school, but they actually show a high-powered 32-bit graphics workstation that was still on the market in my third year of college!)  But this is just the beginning.

The Amiga that they show, the black A1200, is sitting, unplugged, atop an ancient IBM XT that is an entire generation older.  Both machines are so famous and instantly recognizable that the scene is extremely confusing to watch, because it looks like exactly what it is: a mid-90s Amiga 1200, unplugged, being used as a dust cover for a worthless early-80s IBM XT.  (I learned to program on an IBM XT when they were already no longer current, in 1985.)

Then the actors, who apparently are not familiar with how computers work, or with the fact that they need to be plugged in, talk about the specs of the Amiga (an incredibly powerful 32-bit workstation worth many thousands of dollars in the mid-90s) but describe the machine as being powered by the pathetic Motorola 6800 processor – a chip which, to my knowledge, was never used in any computer of note, although the same family included the 6809, which powered the Vectrex home video game system (which Dominica's family owns) and some of the TRS-80 computers sold by Radio Shack.

Then, to add insult to injury, they produce a 5.25″ floppy disk that was supposedly used on the Commodore Amiga.  The original Amiga came out in 1985, and one of its major selling points was that it left the legacy world of 5.25″ floppies behind and moved ahead, along with Apple's Mac and the Atari ST, into the world of 3.5″ floppies, which were more durable and had higher storage density and better overall performance and capacity.  This was extremely well known at the time; it was the first fact anyone would know about any of these machines.  The 5.25″ world included the old IBM compatibles, when they were still called that, the Apple //e and other ancient 8-bit machines.  The original Mac, Atari ST and Amiga were 16-bit machines (but remember that they actually showed a 32-bit Amiga that was about seven generations into the series and even had a hard drive installed).

Since the Amiga didn't have a 5.25″ floppy drive, they stuck the floppy into the IBM XT!  Watching the show without sound, you can't even tell that the Amiga is supposed to be in use; it is only mentioned in the dialogue, and the show actually uses the IBM.  Visually the scene is entirely about the IBM XT, but audibly it is a mishmash of dialogue that sounds like a five-year-old trying to sound knowledgeable by spewing gibberish with authority.

Then they show this IBM XT (a device which normally came with a monochrome green-screen display showing 80-column text) playing a modern, late-90s, 3D-rendered video with more colors than the IBM could display (roughly 16), at a resolution higher than the IBM could produce by orders of magnitude – and all of that before having it perform graphical rendering that was still out of reach of most home video game enthusiasts in 2000.  The implication is that there have been no hardware advancements since 1984 (when the XT was popular) and that the only difference between then and now is that programmers are smarter and know how to write 3D games!

What really amazes me is that of all the people involved in producing an expensive show like Bones – writers, producers, actors, stagehands, prop people and so on – not one single person figured out that the scene was so wrong as to be confusing to even the most casual observer.  How can so little thought be put into a show so expensive to make?  How can so much work go into making a scene so inaccurate?  Just having the Amiga in front of them for ten seconds, even if they had never seen a computer before, would have told them which cables to plug in and what type of floppy they would need for the scene.  And the year of manufacture is probably printed on the back.

An eight-year-old with Google who had never heard of Commodore, Amiga, IBM or floppies could have researched all of this for them in minutes.  Most of the people working on these shows are older than eight, I would venture to guess, and many of them are probably older than I am, which means they should already be well aware of all of this without any research at all.  They lived through these eras.  They watched the 5.25″ floppy begin to fade away in 1984.  They should remember computers that only had green screens.  They should know that sitting one computer on top of another looks odd, and that everyone would see two computers sitting there and notice that the one they mention isn't even plugged in and that the floppy was placed into the wrong one.

Seriously, people.  Hollywood is so sloppy – why do we watch this stuff?  Why not film kindergarteners putting on shows at school?  At least then we would have some guarantee that those kids had attended half a year of kindergarten.  I can't be so sure about the people making these shows.

Scope Creep in Software Development (10 January 2009)

Few words strike more fear into the hearts of software development managers and developers than "scope creep."  Long the bane of the development industry, scope creep has affected almost everyone who works on software – even those who work for themselves, but especially those who report to a client providing software specifications.  The perennial question in software development management has always been: "How can we manage or eliminate scope creep?"

First, let us define scope creep for the purposes of this discussion.  Scope creep refers to the tendency, within software development projects, for the scope of a project to grow slowly over time through changing requirements and specifications.  Scope creep is a disaster for software projects: it makes budgeting nearly impossible; it makes predictions of completion nearly always incorrect, often by astounding margins; it makes determinations of project feasibility questionable, as the development team may not be aware of technical challenges created by the expanded requirements; and it can even cause a project to "scope thrash" – a state in which "feature complete" is never clearly defined and the project never finishes because the current state is never accepted as done, even if all existing features work correctly and the current scope is far beyond what was originally requested.

While thinking about this article it occurred to me that there are two schools of thought on scope creep: those who accept that scope creep is an inherent artifact of the software development process, and those who believe that it can be eliminated (rather than simply mitigated) through project management practices.  But that is not the whole story.  In reality, these two schools of thought are products of a higher-level split: some people believe that big design up front (BDUF) is a necessity and others do not.  Even this is not the whole picture.  Above that sits the old argument as to whether software is an engineering discipline (where efforts require a small amount of design followed by a large amount of manufacturing or construction) or a design effort (where the design requires almost all of the effort and duplication is a trivial afterthought).

Disclaimer: I am well aware that certain small segments of the software development community work on project types which absolutely require BDUF and locked requirements such as nuclear power station control systems or control systems for the space shuttle.  This represents a niche software market and falls outside of a more general discussion.  However, many projects of this type are only believed to be locked into BDUF processes because of client requirements and not because non-BDUF methodologies would necessarily produce less stable or reliable software.  Statistics indicate that we may, in many cases, be increasing risk and failure rates by requiring these projects to avoid the best practices of the industry in general.

The logic goes like this: if you believe that software development is an engineering task, then you assume that all, or almost all, design must be completed before "construction" begins.  This is the origin of big design up front.  It, in turn, requires that scope be carefully controlled, as BDUF has no good means of absorbing changes in scope, and small changes can have catastrophic effects on an engineering project.  Imagine a bridge where, after construction has started, the people who requested the bridge decide that they chose the wrong location, or that its load-bearing properties need to change so that it can carry trains in addition to automotive traffic!

One of the key differences between software development and traditional engineering disciplines is that traditional clients understood that if the product was already made, or partially made, the cost of retooling for changes would be staggering.  Even a small change to a car (make it two inches longer) would send the entire design team back to the drawing board; all aspects of the engineering would begin again, and safety certification, marketing and the rest would have to retool almost as if starting from scratch.  In software, though, clients do not see the product as immutable and do not understand the impact of changes.

Scope creep in software development is deeper than a misunderstanding between clients and software engineers, however.  In specifying a traditional engineering project it is generally fairly simple for the person requesting the product to provide meaningful and useful specifications.  Here is an example: I can order a bridge from an engineering firm by stating the location at which I need the bridge, the carrying capacity of the bridge (two lane road, four lane road, etc.) and any special features (one walking path on the side, two paths, open sides for a view, etc.)  The rest of the bridge design is carried out without my input because what do I know about designing a bridge?  In some cases there might be several designs submitted, all of which meet the specifications but with wildly different looks, from which I can choose one to build.  Once planning for that build has begun there will be costs involved if I change my mind, for obvious reasons.

Software engineering is not like this, though.  In bridge building (or car manufacturing or whatever traditional engineering discipline you wish to imagine) the people physically putting the bridge or car together make absolutely no decisions about what parts to use or how to assemble them.  They are provided a design that tells them every nail, rivet, girder, component, material type, weld type, etc.  In software, the architect provides only a high level design and the software engineers or developers who actually create the software continue the low level design process, choosing the design, components, structure, etc.  So the design process continues until completion, at least at a low level.  In software everyone is an engineer and no one is a construction worker (or factory worker.)

This means that design is always underway no matter what we do.  If software developers were not doing any design and were simply assembling code as specified by a higher level designer, then that piece of the work could easily be automated and the higher level designer would "become" the developer in question.  That may sound strange until you think about high level languages like Java or C# .NET with large support libraries, application platforms like JBoss or Rails, GUI designers, toolkits like JQuery or Dojo, etc., all of which come together to eliminate many routine design decisions for most projects, allowing developers to switch from writing extremely low level code to focusing on higher level design issues and domain specific problems.  This "moving up" of the designer is a constantly occurring process that year by year makes the average developer able to do far more than they could in the past and makes bigger and more complex software projects possible.

Given that we know that scope creep can and will happen to most projects we have three choices: do nothing and see what happens, manage and mitigate scope creep (the BDUF approach) or "embrace change" and build scope change into the ongoing design process (the Agile approach.)  Obviously, doing nothing is a recipe for disaster and should be discounted.  Only the naive take this approach.

Managing and mitigating scope creep is a challenging and difficult project management endeavor.  It generally requires a large amount of client education and heavy contractual agreements limiting the client's ability to make changes, requiring the client to accept a reworking of scheduling and budgeting with each significant request and, at some point, locking the client to a design so that no further changes may be made.  This approach can work and often does, but its failure rate is one of the most significant contributing factors to the high percentage of failed software development projects throughout the industry.

This approach forces clients to make many decisions at the beginning of a project, the point at which the least is known about it.  It does not address changes external to the project that may force the need for scope change upon the project.  It also risks clients becoming dissatisfied with the project partway along and finding it easier to pull out and start again than to rework what already exists.  Even when a project does complete it risks being outdated, poorly specified and less relevant than anticipated.

Moving to an Agile approach, where scope creep is accepted as being inherent to software development, things work very differently.  In most Agile practices there is some design up front, but the idea is to design only those portions very unlikely to change, and only enough to provide the basic foundation and to surface any hidden problems that an up-front design process would have caught.  Obviously no project will work well without any design at the beginning.  How would you even know where to begin?

Agile does not dispute the need for design.  In fact, it is built around the idea that the design is so important that it cannot be determined at the outset and left as-is.  Agile takes the approach that the design is far more important than that and builds design into every aspect of the development project.

A moderate amount of design is done from the beginning.  Then design, both low level and architectural, continues throughout the project.  This approach moves design decisions from the time when the least is known about the project to the time when the most is known.  It builds in flexibility so that as the client or business requesting the software learns how the software can and will work, they can learn more about how they will use it, expand what they need it to do or even reduce features that they later feel are unnecessary (although we all know how unlikely that is to happen.)

With many Agile processes, such as Extreme Programming, there is a concept of keeping the project in a continual “shippable” state in which everything that has been implemented is currently working and functional.  In this way, at any point, if the project budget for time or money runs out the project is at least usable in some form even if not in the form originally envisioned.  In this manner, the scope starts very small and grows with each project iteration.  This is not creep but a built-in function of the development process and very much intended.  In this way, the client or business can decide to use the software at any time while allowing for growth and advancement to continue as well.  The project can peacefully continue until the client feels that additional features are no longer necessary.  There is still a risk that the client will never call an end to the addition of features but since the project is continuously shippable or usable the risk involved in not reaching the “end” of a project does not exist.

An important aspect of Agile methodologies is that features are prioritized and completed discretely on a priority basis.  By addressing features purely by their priority we can assure that the system always exists in its most usable state for the amount of time that has been given to it.

Both approaches to scope creep management have their merits.  The risk of the traditional BDUF approach is that it is often used to shift responsibility from the software development team to the client.  This is handy when you are the developers and have no long-term interest in the viability of the software but are only concerned about meeting contractual obligations (whether to a third party or to an internal business unit.)  The BDUF approach is about solidifying responsibility with the client even though they are not software designers.

The Agile approach accepts that the client or business requiring software is not a software engineer and needs to work with the team to produce something that will meet the business needs.  This approach is about meeting business needs, not about locking the customer into paying for whatever original requirements were specified.  Locking in customers may sound great to some outsourcing shops, but businesses would be well advised to look elsewhere for companies whose interest lies in making great, working software rather than in meeting contract requirements alone.

Businesses looking to build software internally have no reason to support the use of BDUF, which operates against the interest of the business itself.  This ideology pits the software development team against the business when, in reality, they should be aligned to work towards mutual benefit.

When deciding on a scope creep management strategy you must ask yourself, "Are you going to embrace the unknown and use it to your advantage, or are you going to ignore it and continue to make software designed before all current information was gathered?"  Businesses whose processes are Agile have a distinct advantage in the marketplace as their products are based on current market needs and pressures rather than on pressures that existed when the project was first proposed.

The Perfect Development Environment Manual https://sheepguardingllama.com/2008/11/the-perfect-development-environment-manual/ https://sheepguardingllama.com/2008/11/the-perfect-development-environment-manual/#respond Sun, 16 Nov 2008 19:08:14 +0000 http://www.sheepguardingllama.com/?p=2933 Continue reading "The Perfect Development Environment Manual"

This paper was written as my final paper for my graduate Process Management course at RIT.  The paper is based on a semi-fictional software as a service (software on demand) vendor which develops its own software and maintains its own development processes.  The document was to be formatted somewhere between a handbook and a memorandum to the CIO from the Director of Software Engineering, so it is somewhat technical and not all terms are explained.  Because of the target of the paper the document is a mix of theory and everyday practicality.  For example, at times we discuss development methodologies while at others we discuss actual hardware specs that will be out of date almost immediately.

To simplify things, the company in question is simply CompanyX and their main product is ProductY.  This document was originally a Word doc which I had to paste into Notepad and then into WordPress and then manually edit to restore formatting.  So please forgive any formatting mistakes as it was a lengthy manual process.

Overview

As requested, I have assembled a body of knowledge and best practices in process management with which our organization may promote healthy, productive software development and software development management.  Because CompanyX focuses solely on software as a service (i.e. “on demand”) products, many traditional approaches to process and project management do not apply or must be applied with an eye towards modifying their principles to work most aptly within CompanyX’s corporate framework.  This document is an attempt to distill the general body of knowledge available in this area and to modify it so that it is applicable and usable within CompanyX.

As you know, CompanyX's approach to software development has always centered on forward-thinking, progressive hiring practices which aggressively seek out the absolute best IT professionals, combined with extremely agile development practices designed to leverage the skill base of our developers most effectively while avoiding the overhead that other organizations require in order to manage mediocrity in their environments.  Because of these practices it is necessary for any software development discussion within the CompanyX context to take them into consideration and, therefore, to alter generally accepted industry norms and practices to reflect them.

The purpose of this document is not to enact change unnecessarily but to identify potential areas for improvement, to find areas which are weak, missing or poor and to codify practices which are working and should not be changed.  This document is very timely as CompanyX is currently undergoing a change from a single-focus medical facilities software firm to a multi-focused firm with a financial software group and an internally managed hedge fund.

Current Development Status

To begin our discussion, I would like to look at the current state of the CompanyX development environment to establish a baseline against which we may compare our proposal of best practices and design.

CompanyX has two key datacenters, Chicago (internal and development) and Houston (production and Internet facing / non-VPN applications.)  Houston is in the process of being relocated to San Francisco.  A third datacenter, dedicated to backup storage and warm recovery needs, is being planned in Philadelphia.

At this time, CompanyX employs only a handful of developers most of whom do not need to directly collaborate as each works in a highly segregated technology niche which creates a situation of relative exclusion simply by its nature.  All CompanyX developers work from home offices provided by CompanyX using CompanyX workstations, software, networking hardware and telephones (when appropriate.)

Collaboration is handled through the secured corporate XMPP and eMail servers as well as over VoIP.  Remote desktops are accessible using the central NX remote desktop acceleration system, giving all employees access to one another on the central network.  Centralized documentation is handled through the corporate wiki.  Source control is handled via Subversion hosted at Peoria Heights – closest to the development servers.

Each home office has a hardware firewall with IPSec VPN acceleration to make networking to CompanyX's Peoria Heights office transparent and simple.  Files can be transferred directly via CIFS or NFS from the central filers.

The main development platform at this time is Visual Studio 2008 using C# 3.0 and ASP.NET 3.5.  Visual Basic and VB.NET have been eliminated after years of weaning.  SQL Server 2005 and MySQL 4/5 are the key database platforms.  PHP is in use for some internal applications.

Given CompanyX’s current small scale this environment has proven to be quite effective, but we must realize that as growth and diversification occurs it will become increasingly necessary to add more structure and communication methods in order to keep projects moving fluidly.

Physical Location Issues

The first issue in development environments that we will address is physical space and location issues for the development teams.  Conventional wisdom says that the most practical means of getting maximum performance from developers is to locate them physically near to one another to facilitate rapid and smooth communications.  The need for rapid communications, however, is in direct conflict with the need that developers have for quiet and uninterrupted time for development to allow them to efficiently enter and maintain “the zone”.

“…knowledge workers work best by getting into “flow”, also known as being “in the zone”, where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done.” – Joel Spolsky, Joel on Software

In weighing the benefits of high level communications versus the benefits of isolation, flexible time, flexible location, travel, etc. we have determined that the cons of collocated development teams outweigh the benefits in our case.  Our approach, at this time, is to continue to pursue a completely work from home environment encouraging extremely flexible schedules allowing developers to maintain “the zone” for longer than usual durations while encouraging extended downtime when “the zone” is not obtainable.

Physical Environment

Regardless of the physical juxtaposition of developers, a superior development environment is a necessity.  Following the recommendations of Joel Spolsky, a recognized expert on physical development environments, we intend to make Herman Miller Aeron chairs available for all developers along with two 20.1” LCD monitors as a minimal, standard setup.  Additional monitors and choice of development platforms (Windows, Mac OSX, Linux, Solaris) are available at the request of the developers.  Also, all developers have access to commercial grade Bush desks that can be shipped directly from or picked up from Office Max.  These desks have surfaces that are very conducive to heavy mouse usage.

Communication Tools

Given that intra-team communications are of the utmost importance, but recognizing that we are unable to effectively co-locate most of our staff, we must turn to technology to mitigate this problem as far as is possible.  This is a challenge faced by all teams, even those with the luxury of physical proximity.  We believe that we hold a hidden advantage in this area because we acknowledge communications as a shortcoming up front and tackle the issue seriously rather than assuming that communication is handled naturally by physical proximity.

We have a number of tools at our disposal that should function well in the role of providing a solid communications infrastructure for CompanyX.  We have many of these tools in place already such as a very successful email system and groupware package that handles basic communications both internally to the company and externally to clients and vendors.  The system is secure, reliable, readily accessible both inside and outside of the company, is tested to work well with BlackBerry and iPhone devices, etc.  We have team aliases set up to encourage team collaboration in this form.

Our email system also includes a simple document management system.  It has not proven useful for team collaboration, but it has proven very effective for individual note-taking and reference information, storing that data outside of the email format itself while still allowing it to be accessed easily through the convenience of the email interface.

Group calendaring is currently proving to be a very useful addition to the eMail and groupware system.  Calendars are easily viewed by interested parties and teams can select calendar views to allow them to see the available and busy times of everyone on their team all at once in a single calendar view which is very effective for ad hoc planning and collaboration events.

In addition to asynchronous, archival communications available via eMail, CompanyX focuses heavily on the use of the XMPP/Jabber protocol for instant messaging via the OpenFire Server product.  This tool has worked very effectively for us in the past and our plans for the future involve a continued investment in this technology and an advancement in our client-side architecture with which to support communications. Currently we have a Java-based client, Spark by Ignite RealTime, that serves very well for general use.  We have begun testing an Adobe Flash based client that is available via a web page that may prove to be very useful for staff who need casual instant messaging access from machines where Spark is not installed.  It is our goal to make secure instant messaging access simple and ubiquitous so that staff are available to one another, in real time, whenever possible.

Often staff is required to be live on the instant messaging infrastructure while located in a secured office location that does not allow direct XMPP/S communications to the Internet.  For these users we have implemented an SSL Proxy / VPN product, SSL-Explorer, which allows them to access our applications such as eMail and XMPP even under the harshest network conditions.

Recently, CompanyX has moved from pmWiki to DokuWiki and has begun focusing heavily on using the wiki format to support as much internal development documentation as possible.  The wiki application is ideal for much of development communication because it is easy for any collaborator to create necessary documents, all contributors can edit their appropriate documents and staff requiring information have a central, searchable repository in which to perform searches for necessary information.  Wiki documents are automatically versioned making it trivial to track changes over time and to return to previous versions when necessary.

The wiki format offers many benefits that would be considered non-obvious.  DokuWiki allows staff to subscribe to wiki pages in which they have an interest, receiving automatic change notifications either via eMail or via RSS/Atom feeds.  Through this subscription system the people who are responsible for, or dependent upon, a particular set of data can find out about changes immediately and react accordingly, even if the person modifying the document is unaware of which parties may be affected.

The wiki format is useful for many types of documentation, with standards documents, best practices, guidelines and project initiation documents such as the project charter and use cases being among the most effective.  The wiki is most useful as a supporting architecture for use cases, which are the bread and butter of our design documents at CompanyX.  Their text basis and ongoing modification make them ideal candidates for wiki-ization.

Not all document types lend themselves well to the wiki architecture.  Rarely changing documents, end-user printable documentation and UML diagrams, for example, are more appropriately handled through a richer format such as Microsoft Word, Microsoft Visio, Umbrello or OpenOffice.org Writer.  To support these file formats a more traditional content management system is recommended.  The two products that are leading, at this time, in that area are Microsoft’s SharePoint Server and the Alfresco Content Management System.  Both of these products offer portal functionality so that they may function as a central starting point for staff searching for content as well as document repositories with interfaces to traditional file stores.  Both may additionally be used as supporting frameworks for modular application interfaces.

Voice communications remain very important, especially for detailed technical discussions in real time.  In order to support fluid and cost effective voice communications CompanyX will implement an internally hosted Voice over Internet Protocol (VoIP) system.  VoIP compares favorably to traditional voice technologies, with many advanced features available such as conference calling, call forwarding, follow me service, voice-to-text support, auto-paging, voice to email interfaces, etc.  The cost of VoIP is also very low compared to traditional voice technologies.  Because VoIP systems are so cost effective, CompanyX has the opportunity to extend the internal corporate voice infrastructure into the homes of all staff, providing a seamless environment so that neither internal staff nor external clients can easily tell whether a given member of staff is located at home, on the road or in the office.  This flexibility and transparency is important in supporting the disparate office environment required by CompanyX.

Unified Communications (UC) platforms, such as those made popular by Microsoft, Cisco, Nortel and Avaya, offer advantages over our recommended approach of maintaining and moving forward with separate eMail, IM, VoIP and other communication products.  By choosing separate products, however, CompanyX gains the benefit of “best of breed” components rather than a single, unified product that does not necessarily compete head to head with industry leading products in any one area.  More importantly, UC falls into a legal grey area at this time and may subject loosely regulated communications, such as voice, to the long term scrutiny and archival requirements applied to communications such as eMail.  Because of laws within our primary jurisdiction, the United States, it is important that UC be avoided until the legal questions surrounding its implementation are resolved conclusively.

Development Methodologies

Development Methodologies are critical underpinnings of software development.  CompanyX has always focused on highly iterative and extremely agile development practices focusing heavily on those methods and practices that lend themselves first and foremost towards heavy innovation, rapid adaptation to the market and quick feature development.

Because of our needs and strategies, CompanyX has turned to highly modified Agile methods drawing inspiration from popular Agile methodologies, especially eXtreme Programming (XP) and SCRUM.

CompanyX has adopted the XP use of the “user story” as the primary measurement against which development is performed.  Stories are not completely “user” related and can sometimes be “developer” stories to allow simple scheduling for changes not visible to the end user.  This can be misleading to XP adherents as stories are designed to be visible to the client, but CompanyX’s true end client is, in a fashion, the development organization itself and so we treat stories in an unusual way at times.

In order to support the continuous development processes within CompanyX any standard methodology must be adapted heavily as all major methodologies rely upon release cycles, sprints, etc.  CompanyX uses Continuous Sustainable Velocity as its model and does not have scheduled release cycles.

A development team “burn down chart” or backlog system is used to schedule upcoming stories so that priorities are maintained even without a set schedule.  The backlog is continuously managed by the development lead and the analyst lead / project manager to keep the stories organized by priority but with continuous reference to their weight (estimated time for completion.)  The weight may be used to determine that a small feature or minor change might need to be popped to the top of the backlog stack because it represents better story velocity.
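To make the weighting idea concrete, the sketch below is a hypothetical illustration only; the Story and Backlog types, their fields and the simple ordering rule are invented for this example and are not CompanyX production code.  It shows one way a backlog could be kept ordered by business priority first, with weight as a secondary factor, so that a small, quick win can be popped to the top of the stack ahead of a heavier story.

using System.Collections.Generic;
using System.Linq;

// Hypothetical illustration of backlog ordering; all names are invented.
public class Story
{
    public string Title { get; set; }
    public int Priority { get; set; }    // 1 = highest business priority
    public double Weight { get; set; }   // estimated days to complete
}

public class Backlog
{
    private readonly List<Story> stories = new List<Story>();

    public void Add(Story story)
    {
        stories.Add(story);
    }

    // Order primarily by business priority, then let lighter stories
    // bubble up so a quick, high-value change is worked on sooner.
    public Story Next()
    {
        return stories
            .OrderBy(s => s.Priority)
            .ThenBy(s => s.Weight)
            .FirstOrDefault();
    }
}

In practice the development lead and project manager adjust these orderings by hand; the point of the sketch is only that priority and weight are both visible when deciding what to work on next.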

The goal within CompanyX is to get new features designed, developed, tested and into the hands of our customers as quickly as possible.  Because we release all of our software directly to our own servers and deploy internally making the software available as a service to our customers we focus on releasing as features are made available and do not need to announce most new features or changes.  We try to roll out changes gradually on a regular basis so that customers always have some new fix, feature or performance enhancement.  To our customers our system is very much alive – a system constantly under change and development.

Extreme Programming

CompanyX’s methodology also follows the XP principle of simplicity and “You Aren’t Gonna Need It” or YAGNI.  This principle, which states “Always implement things when you actually need them, never when you just foresee that you need them.”, is important to allow CompanyX to produce software more rapidly while delaying potentially complicated and unneeded work until we are sure that it is required.

CompanyX loosely follows Mehdi Mirakhorli’s Rules of Engagement and Rules of Play for XP which I will list and explain:

  • Business People and Developers Work Jointly: CompanyX follows this very closely but, because CompanyX does not have direct customers and is itself the primary customer of the software products which it then hosts as a service, CompanyX provides an industry trained business analyst who represents CompanyX as an entity as well as theoretical business customers.  This analyst is available full time as a “business person” to whom the developers can go for clarification, expansion, etc.
  • The Highest Priority is Customer Satisfaction: This means primarily that CompanyX is able to host and manage the application and second that CompanyX’s SaaS customers are thrilled with the quality, features and innovation of our services.
  • Deliver Working Software Frequently: In XP, short term timeboxing is used to encourage the delivery of new features and updates on a regular basis.  At CompanyX we attempt to roll features and fixes out as quickly as they become available on a continuous basis.  Delays are only introduced to this system when multiple changes may be available extremely close to one another or when system changes are considered risky.
  • Working Software: Our goal is to produce working software.  No other deliverable has the priority of working software.
  • Global Awareness: The development team must tune its behavior in order to support the overall needs of the organization and its customers.
  • The Team Must Act as an Effective Social Network: Honest communication leading to continuous learning, an emphasis on person-to-person interaction rather than documentation, empowerment for developers and analysts to do the work that they need to do and to make the decisions necessary to create and innovate, and a level political playing field so that all team members are available to one another and seen as peers.
  • Continuous Testing: All code is continuously tested through all reasonable means.
  • Clearness and Quality of Code: Code should be clean and elegant resisting duplication and obfuscation.  When possible, code should adhere to the precepts of Software as Literature.
  • Common Vocabulary: The team, as well as the organization, should share a common vernacular to reduce miscommunication and increase the speed of development.
  • Everybody Has the Authority: Everyone on the development team has the authority to do any work, to fix any problem and two or more people are always available who understand any given piece of the system.

In addition to the “Rules” of XP which CompanyX follows almost exactly, there are twelve XP Practices, as set down by Kent Beck, which we follow less closely.  The practices which we do not believe are beneficial to us include pair programming and test driven development (at all times.)  We feel that these practices are too heavy for CompanyX’s needs.

The XP Practices which CompanyX should be adhering to include: the Planning Game (for CompanyX this is played between the development and analyst leads), Whole Team (business is always available for the developers), continuous integration, refactoring, small releases (CompanyX uses continuous releases), coding standards, collective code ownership, simple design, system metaphor (business and development joint metaphor for system functionality creating a meaningful, shared namespace) and sustainable pace (CompanyX should never approve pushed overtime.)

SCRUM

CompanyX also takes concepts from the SCRUM development methodology, but not as heavily as from XP.  SCRUM’s focus is more upon agile software management than upon software development itself, which is the focus of XP.

SCRUM separates stakeholders into two key groups, pigs who are completely invested in the project and chickens who have an interest in the project but are not necessarily totally committed to it.  CompanyX follows this tradition but without the humorous naming conventions.

Within the pig group of stakeholders, SCRUM recognizes three key roles: product owner, ScrumMaster and the Team.  The product owner is the voice of the customer and makes sure that the team is working on features and in a direction most beneficial to the business.  The ScrumMaster, sometimes called a Facilitator, takes over the traditional role of the Project Manager, but instead of leading the team or “managing” them the ScrumMaster’s job is to interface with organizational management and to make sure that outside influences do not get in the way of the team’s ability to work.  Finally, the team itself includes the developers, designers and others who perform the “real work.”

CompanyX organizes its teams similarly.  The Product Owner role is played by the business analyst assigned to the project, who represents the clients or potential clients.  Since SCRUM is generally organized around contract software development, the traditional Product Owner is more aligned with a contract customer, while CompanyX’s customers are potential industry customers; the business analyst’s role is therefore to have a firm grasp on the business of the industry as a whole and to represent those customers in a way that no one person within the industry could do themselves.  The roles are extremely similar.

The ScrumMaster role is less clearly defined within CompanyX.  As CompanyX does not actually use SCRUM we do not require someone to manage the SCRUM process.  A manager is selected to act in the Facilitator role, although we use the term Project Manager, and it is that person’s job to keep the team able to work.  The Project Manager is not over the team and does not manage the team but manages the project, interfacing with the sponsors, providing data and keeping the team able to keep working without being bothered by the overhead of politics.  Often this role does not require a full time person and is played either by an executive sponsor or by the business analyst.

CompanyX’s concept of a team aligns very well with the SCRUM idea of the team being a self-organizing entity that needs no external management.  Like SCRUM, CompanyX keeps teams small so intra-team communication is fast and fluid.  The additional overhead of formal meetings or project management processes is not necessary.

Much of CompanyX’s concepts of project task organization come from SCRUM.  CompanyX’s teams utilize the product backlog and burndown chart systems for providing straightforward and simple project planning, scheduling, etc.  As we do not use Sprints themselves the idea of the Sprint Backlog is missing and, as we see it, unnecessary.  CompanyX does not require schedule detail of the level that is generally provided by the Sprint backlog and so this is seen as superfluous overhead that should remain out of the picture.

Unlike both XP and SCRUM, CompanyX does not generally encourage customers to become a part of the development process.  Ideas and suggestions are solicited from customers at times but these are used to provide vision and guidance but almost never day to day project goals or scheduling.  We choose this model because CompanyX delivers a software product that is then purchased by unknown customers.  It is the job of the business analyst to understand the needs of the potential customer pool and to anticipate their needs better than they would be able to do.

After eight years of developing our methodologies based upon our studies of the software management and development field and upon our unique software development models and delivery mechanisms we feel that our blend of light management, informal processes and extremely agile methodology is quite effective and does not constitute unnecessary overhead while allowing the team members the ability to communicate and work effectively together.

Core Technology Competencies

Due to CompanyX’s relatively small size it is important that we focus upon a core group of technology and platform competency areas in which we are able to specialize and develop a mature practice in which we are able to cross-train and mentor.  The core areas in which we are currently focused include Microsoft’s C# on the .NET platform, Java with Spring and Hibernate on UNIX (both Red Hat Linux and Sun Solaris) and Ruby on Rails.  C# and .NET have long been our core platform replacing our previous VBScript/ASP platform based on Microsoft’s venerable DNA model.

C# and Java remain our most critical platforms providing the basis for much of the code which we currently have in production as well as much of the code that currently exists in our pipeline.  These two languages and library collections are also, by far, the most well known and well understood platforms by our current development teams who have used these technologies extensively both within CompanyX and without.

In the past year, Ruby and the Rails web application framework have begun to appear as an important additional platform choice for CompanyX.  Ruby on Rails is targeted at rapid web application development and often aligns well with highly agile methods like those used at CompanyX.  Ruby is well suited to the database intensive but computationally light applications which we are apt to produce.  Ruby’s high level of abstraction and powerful library system is highly advantageous for building complex business applications very quickly.

For financial applications and other situations with high computational performance needs, the use of C or, less likely, C++ may be critical for small pieces of code requiring extensive tuning.  This may not be a correct assumption, however, as the Java platform is extremely performant, especially in numeric computations, and may prove to be more than adequate.  Having a C compiler tested and approved would, most likely, be important to ensure that necessary tools are ready before they are needed.  Our financial development lead is currently evaluating two compiler packages in order to find an appropriate choice for our platforms.  As we are standardized on AMD Opteron and Sun UltraSparc processors only the GCC (AMD optimized) and the Sun C Compiler (Sparc optimized) are being considered, since both outperform the rather expensive Intel C Compiler for our chosen hardware platforms.

Future Technologies

In the future, having access to a readily available C platform may be critical as CompanyX investigates the NVidia GPU-based CUDA technologies for massively parallel floating point computations.  The CUDA libraries and drivers necessary to access the NVidia Tesla GPU hardware are currently callable only through C which may necessitate the use of this language.

Under investigation, potentially as production development platforms but more often as learning and exploration opportunities, are a number of languages and platforms.  We believe that it is important to constantly seek out new languages, approaches, platforms and techniques so that we are prepared when new challenges arrive.  Experimenting in new languages and platforms is beneficial outside of potential production usage as well.  Having an opportunity to work in a diverse environment is more exciting for developers and also contributes to growth.

Currently we are experimenting with the use of Clojure (a Scheme-like LISP language that runs on top of the Java runtime), Groovy (a Ruby-like language that shares syntax with Java and runs on the Java runtime) and Scheme (a LISP variant.)  Other strong candidates for our use include Scala, Boo, J# and Haskell.  By working with new and groundbreaking languages we expand the way in which our developers think, helping them to approach problems from a unique perspective and to discover new and different solutions.

Design Tools

Standardized design tools are important in providing consistent communications and documentation from the design process to the development process.  CompanyX chooses to focus on a limited set of highly effective design tools that deliver maximum communicative power without unnecessary overhead or the creation of design documents that are unlikely to be used and may rapidly become outdated and ineffectual.

We have chosen to use the wiki format as our preferred mode of design document sharing.  Wikis are easy to implement and easy for all team members to access and modify.  Collaboration happens fluidly and naturally on a wiki, and built in change control mechanisms protect against data loss even when major changes to documentation are made.  In the past we used pmWiki but found the system to be lacking in many respects.  After some searching we have, in the past year, migrated to DokuWiki, which is much better suited to corporate-style documentation needs and offers authentication options, such as LDAP integration, that allow it to authenticate either to our UNIX systems or to Active Directory and Windows.

The wiki format has proven to be very effective from its initial usage at CompanyX and we have begun to use it in many areas including new hire documentation, helpdesk information, application and package instructions for desktop applications, scripts for deskside build and migration tasks, general company data, etc.  Our goal is to move as much information as the format will allow, except for financial control data, into the wiki, both because we find this system to be very effective for our small firm and because putting more information into a single repository increases usage rates and allows more employees to find more types of data more quickly.

At CompanyX we have chosen the Use Case as our primary form of design documentation.  Use Cases are simple for all parties to create, consume and comprehend and provide the most important piece of documentation needed by the development team – a clear understanding of the customer needs.  Use Cases are divided into business use cases and system use cases and are designated as each within the wiki.  Business use cases come from the business analyst and include business process modeling.  System use cases come from the system analyst / architect and involve technical, system process models.  Comprehensive Use Case documents are a long term project resource since they reflect the way in which the system will be used or the way in which we believe that it will be used.  As such this documentation is much less likely to become outdated than more technical documents are wont to do.  In some cases, Use Case information close to a decade old has been beneficial in passing long term project knowledge between project iterations.

Beyond the use of Use Cases, which are often mentioned in conjunction with the Unified Modeling Language (UML) but are not actually a part of the UML specification, CompanyX makes selective use of UML diagrams and models for more technical aspects of the system design process.  The graphical nature of UML diagrams makes them naturally more difficult to store, catalogue, search and retrieve.  Because no system, of which we are aware, is effective at dealing with a store of UML diagrams we have chosen to use the wiki system again as a UML diagram repository alongside the Use Cases.  In some rare cases a UML Use Case Diagram might be attached directly to a Use Case but more likely a separate category of pages is created with carefully organized diagrams arranged as images on the wiki page with appropriate text on the page making the page as searchable as is possible.

While our use of UML is less extensive than Use Cases we find the more technical diagrams such as class diagrams and object diagrams to be the most effective.  Other types of diagrams would be used only rarely when a special circumstance called for a high degree of clarification.

UML diagrams are created using the Umbrello module of the KDevelop Developer’s Suite.  This package is available on all standard CompanyX Linux desktops.  Umbrello was chosen as the best UML modeling tool for our purposes after auditioning  several other tools such as ArgoUML, Kivio, StarUML and Poseidon.  For some special case needs Microsoft Visio may be required.  This is available on all CompanyX standard Windows desktop builds.

In addition to traditional design tools, CompanyX, being completely non-colocated, requires special attention to the needs of team communications in a collaborative development environment.  In order to support these forms of communications, which are especially critical during design phases, we have traditionally implemented instant messaging conference rooms and advanced voice telephony functionality that includes extending the organizational voice system to all offices and employee homes.  These tools are key for rapid communications but play only to synchronous or near-synchronous communication needs and do not address long-form discussion, discussion tracking or decision archiving.

To support these longer duration needs, we are suggesting that an online forum application be implemented that will allow teams to pose questions and ideas and invoke response and discussion both from the team itself as well as from the community.  Many open source forum and bulletin board packages exist to provide this functionality.  We have previously auditioned phpBB which is, most likely, the most popular package available in this space.  A new product, Vanilla by Lussumo, is also a recommended candidate.  Another potential candidate is Beast, a Ruby based forum.

Development Tools

Developers work day in and day out within development environments and because of the percentage of time spent within these tools it is very important that these tools be readily available and appropriate for the task.

Hardware

All developers should receive moderately high performance workstations which are more than adequate for software writing.  Hardware is effectively free in comparison to developer time and effectiveness.  The IT team keeps a standard desktop build design for hardware and software designed to be efficient and powerful whilst giving developers the flexibility that they need to be effective.

Because CompanyX has a highly “at home” workforce, and because our desktops need to be transported regularly, it is useful to keep our machines as light and small as is reasonable.  The recommendation is to use the Hewlett-Packard Small Form Factor dx5xxx family of 64bit AMD Athlon64 and AMD Phenom based desktops.  These machines are available natively with 64bit Windows Vista to support the latest Microsoft-based development environments and are completely compatible with our Novell OpenSUSE 64bit development environment.  In some special cases, Apple Mac OSX or Sun SunBlade Solaris desktops may be requested.

CompanyX should make a concerted effort to rapidly replace all aging 32bit systems still in the field with 64bit hardware and software, as all deployment platforms have already migrated exclusively to 64bit.  Development will be simplified if only a single architecture standard is in use.

Many companies choose to use full-fledged multi-processor workstations rather than high end desktops, such as are recommended here.  Those are very valuable and should be made available upon request but most developers, we believe, are unlikely to wish to have large pieces of equipment such as these in their home offices and they are unwieldy to transport making them a less advantageous choice compared to the less expensive small form factor (SFF) desktops.

Desktops should be outfitted with very fast, multi-threaded processors and extensive memory allotments to support rapid development, responsive environments and easy, local compiling.  If any deviation from the IT department standard hardware is made, care should be taken to ensure that the hardware selected is able to support multiple monitors; many desktop products, including units in the same lines as we use, are unable to do so without the addition of extra hardware, which adds expense, support problems, time to deliver and complexity.

In addition to dedicated hardware, we have traditionally offered a remote desktop option for users working from remote locations or who just want access to an internal CompanyX managed Linux desktop sitting inside of the corporate firewall already configured with a large set of tools for development.  To enhance future developer flexibility it is recommended that this system be upgraded to newer hardware and software and designed to handle many simultaneous remote users nearly equivalent to the entire development team.  This remote desktop service should be upgraded to mirror our most current Linux standard builds to provide consistency between the desktops and the remote access services.  Also, currently Windows developers needing remote access to Windows development environments do not have a reasonable option to accomplish this.  The proposed solution is to add additional datacenter functionality that will support direct, remote access to developer’s personal workstations through a central accessibility portal.

Software

For Windows development, CompanyX has been quick to adopt the latest Microsoft Visual Studio platforms, moving to each one as it has become available, from Visual Studio 6 through VS.NET 2002, 2003, 2005 and currently 2008.  Visual Studio provides a complete set of Integrated Development Environment (IDE), compiler collection and database design tools.  For our standard development package we believe that this set of tools is the most appropriate for our chosen C# / .NET platform.

For UNIX development, CompanyX has chosen to use OpenSUSE Linux on the desktop and Red Hat Enterprise Linux on the server providing a nearly seamless set of tools.  Linux is our operating system of choice for most development languages including Java and Ruby.  Unlike on the Windows platform there is not a single, obvious source for all development needs on UNIX.  We have chosen the Eclipse IDE as our corporate standard for Java and Ruby development.  Eclipse is available on both Linux and Windows providing additional opportunities for platform independence.

On occasion, C and C++ development is needed.  For this purpose we have decided to provide two options.  The first option is C support in Eclipse.  This is useful for most C tasks.  The second option is the Linux-specific KDevelop IDE, which is well suited to development targeting the Linux KDE desktop and the Qt Toolkit.

Continuous Sustainable Velocity

CompanyX has long valued the importance of providing an environment where developers are given the best possible opportunity to innovate, design, create and write great code.  We believe that introducing excess overhead, including most deadlines, does not lead to better code or better innovation.  We want to be sure to provide our development teams with the best possible tools and environment in which to create while doing as little as possible to hamper their desire to produce.  Taking a page from the book of academic procedures, we see all of our development as being a part of “research and development” and encourage any flexibility necessary to encourage this type of behavior.

Continuous Sustainable Velocity, or CSV, is basically the aforementioned ideal.  The idea behind CSV is to create an environment and management structure to support a maintainable maximum development velocity without exceeding fatigue thresholds.  Traditional projects use deadlines as a means of pushing teams to finish code more rapidly.  Often this works but at the expense of slowed development after the push and a higher rate of bugs and errors.  This type of management behavior sacrifices long term gains in return for short term benefits and overall produces less code and what code is produced is of lower quality.  This is similar to Wall Street firms forfeiting long term financial growth in return for short term stock price inflation.  CompanyX does not have to answer to short term gain seeking investors nor does its software delivery model create a need for hard development and release cycles.  We are in a unique and enviable position to exploit these advantages.

CSV does not eliminate the potential use of iterations; that is not the intent.  The idea behind CSV is to eliminate deadlines which are set by management, marketing, sales, customers, etc.  All products are created on a “when it’s ready, it’s ready” basis.  Management works with development to determine “release criteria” which establish when certain milestones have been met.  Time estimates from developers serve as approximate completion dates, giving managers the ability to create meaningful “drop dead” dates for burndown task lists that mark when an iteration, version or milestone is complete.  This provides us with a scheduling mechanism, but one very different from what most companies use.  In this way some amount of scheduling can be achieved without inflicting deadlines upon the development team.  Instead, deadlines are inflicted upon management.

Because of the Software as a Service model in use within CompanyX there is no need for a formal product release cycle.  Instead software can be released at the end of every iteration.  In many cases these releases are very small and changes could go unnoticed by customers.  If an important feature or fix is ready before an iteration is complete it can be released to production immediately.

CSV is designed not to encourage overtime but to encourage flexible time.  The idea is a sustainable pace, with the understanding that every day cannot be the same and that without variety there is no way to obtain a sustainable pace.  Employees are encouraged to work long hours when they want and to not work when they do not want.  CompanyX’s goal is to allow each employee to work in the manner in which they feel most productive.  For some this means eight hours per day, five days per week during normal, daylight hours.  For others this might mean working a constantly changing schedule or working in huge bursts of creativity followed by days of relaxing and not coding.  We recognize that people are unique and that everyone works differently.

General Development Best Practices

There are many industry accepted best practices for all types of software development.  Among these practices are code review, source control, unit testing, test driven development (TDD), continuous integration (CI) and bug tracking.  These are the primary practices which we will be targeting as most important to CompanyX at this stage and which, we believe, can be implemented and enforced quickly.

Code Review

Code review is an important part of any organization’s attempt to improve code quality at a line by line level, as well as a way to create organizational integration by disseminating knowledge, technique and style between team members.  Having different team members regularly read each other’s code provides a perfect opportunity for learning, instruction and communication.  This is especially true for younger and more junior developers reading through the code of their seniors and mentors.

Code reviews have the obvious benefit of error reduction.  The idea here is not that code review will reduce syntax and compiler errors, as those will generally be caught by automated tools (except in languages like Python or Ruby, where errors in code paths not exercised during testing may only surface in production), but that having a second set of eyes on the code will allow logic mistakes to be caught or give someone an opportunity to suggest an alternate approach that may be cleaner or more efficient.  Perhaps even more importantly, if a developer knows that she is going to undergo a code review she is more likely to make the code easier to read, which naturally also makes it easier to support later.

Source Control

Source Control is an absolute necessity for any development environment.  Even a team of a single developer should be using source control, as it is critical in maintaining records of changes and providing the ability to go back to previous source versions.  The larger the team, however, the more critical the role played by source control.  Versioning is important, but the ability to merge changes from several developers and to manage branch and trunk development is even more so.  CompanyX has chosen to use the very standard Subversion source control system.  Subversion is open source, free and very widely used in the industry.  It has become the de facto standard over the previously popular Concurrent Versions System (CVS) which it was designed to replace.

Unit Testing and Test Driven Development

Unit Testing and Test Driven Development often go together as a pair of practices.  Unit Testing, often handled through a unit testing framework such as nUnit for C#, JUnit for Java and Test::Unit for Ruby, is a style of code testing that exercises the smallest possible piece of the code infrastructure, often a single method or function.  Unit tests are a very important part of any standard refactoring procedure.  By writing the original code and testing it against unit tests we have, in theory, proven that the atomic unit of the code is acting correctly.  There is still a possibility of error in both the mainline code and the unit test, though this is far less likely, and there remains the very real possibility of architectural or design mistakes, but a great number of errors can be caught and eliminated early through the use of unit tests.  Unit tests, at a very minimum, force the developer to express the same piece of behavior in two different ways (a “positive” in the form of mainline code and a “negative” in the form of the expected results), which can have the benefit of forcing the developer to think differently about the problem and may result, in some cases, in faster and more creative problem solving.

The nature of unit tests makes them highly automatable, allowing them to be run regularly, such as with each code check-in, to verify that new code changes do not break previously functioning code and that refactorings of existing methods continue to work as expected.  Unit testing increases the level of confidence in the code and makes integration much simpler.

Test driven development uses the principles of unit testing but reverses the order: instead of writing a method and then writing a unit test to verify the method's validity, you first write a test, which will initially fail, describing the expected output of the to-be-written method.  The method is then written against the test until the test passes.  The key benefit generally touted by proponents of this approach is that writing more tests, which TDD causes by its very nature, makes software development more effective.  Bug levels do not necessarily fall compared to test-behind unit testing, but productivity has been measured to be higher when tests are written first.  It is believed that this is caused by the thought processes and mental approaches to problems taken by the developers when they are writing the tests.
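
A hedged sketch of the test-first ordering, again with invented names: the test below is written and run first, failing because Slugify does not yet exist, and only then is the method written to make it pass.

using NUnit.Framework;

[TestFixture]
public class SlugifyTests
{
    // Step 1: written before the production code; it fails until Slugify exists
    // and behaves as described.
    [Test]
    public void Slugify_LowercasesTitleAndReplacesSpacesWithHyphens()
    {
        Assert.AreEqual("monthly-waste-manifest",
                        TextUtility.Slugify("Monthly Waste Manifest"));
    }
}

// Step 2: the simplest implementation that makes the failing test pass.
public static class TextUtility
{
    public static string Slugify(string title)
    {
        return title.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}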

For CompanyX, TDD may be, at least at this time, rather too extreme for us to begin to implement, but the use of unit testing across the board, on both new and old projects, is very important.  Once unit testing is implemented and accepted, experimenting with TDD can follow.

Continuous Integration

Continuous integration builds on the benefits of unit testing and source control.  The concept behind continuous integration is that each time code is checked in to the source control system, which should happen at least daily, a full, automated build process is run, creating, in theory, a completely working build of the package, and the complete unit testing suite is then run against it to verify that integrity has been retained.  Should any test or the build itself fail, the package must be corrected immediately to ensure that there is a working codebase for work to continue the next day.

CompanyX has not yet utilized continuous integration in our practices, but this practice should be rolled out in conjunction with a comprehensive unit testing framework.  Continuous integration most definitely benefits from having unit tests already in place, but we are able to begin before the unit tests are actually done, allowing momentum to be achieved.  Popular continuous integration packages include CruiseControl, Cruise and Bitten.  We are assuming the use of Bitten as it is integrated with Trac, which we will discuss below.
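
Whichever package is ultimately chosen, the loop it automates is simple enough to sketch.  The console runner below is not Bitten, and the solution and assembly names are placeholders; it simply illustrates the build-then-test cycle that the continuous integration server performs on every check-in:

using System;
using System.Diagnostics;

// Minimal illustration of one continuous integration cycle: build the freshly
// checked-in code, run the unit test suite, and fail loudly if either step
// breaks.  A real CI server (Bitten, CruiseControl, etc.) adds check-out from
// Subversion, scheduling, history and notification on top of this loop.
public static class BuildCycle
{
    public static int Main()
    {
        if (!Run("msbuild", "CompanyX.sln /t:Rebuild"))          // placeholder solution name
        {
            Console.Error.WriteLine("BUILD FAILED - correct before further check-ins.");
            return 1;
        }
        if (!Run("nunit-console", "CompanyX.Tests.dll"))         // placeholder test assembly
        {
            Console.Error.WriteLine("TESTS FAILED - the trunk is not in a working state.");
            return 2;
        }
        Console.WriteLine("Build and tests passed; the codebase remains releasable.");
        return 0;
    }

    private static bool Run(string tool, string arguments)
    {
        ProcessStartInfo startInfo = new ProcessStartInfo(tool, arguments);
        startInfo.UseShellExecute = false;
        using (Process process = Process.Start(startInfo))
        {
            process.WaitForExit();
            return process.ExitCode == 0;
        }
    }
}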

Bug Tracking

Bug and feature tracking involves a centralized error management system that provides a single location for recording bug occurrences, status, priority and so on.  Bug reports may be produced internally by developers or testers or externally by clients.  By having a central location for bug data, managers are given an opportunity to perform triage on the bugs and can provide developers with a clearly prioritized "to do" list.  This is important in keeping projects moving at maximum velocity as well as for minimizing the impact to clients when bugs appear in production systems.

Bug tracking systems are important for any number of reasons arising from the need to know where bugs are, when they have been fixed, by whom and how.  CompanyX has, in the past, auditioned products like Bugzilla and FogBugz for this very purpose but was very disappointed in the ability of these products to meet our rather simple needs.  We have decided to use the Trac product from Edgewall Software.  Trac is a popular bug tracking system with an integrated continuous integration feature, Bitten, that can be implemented alongside the tracking product.  Trac is already designed to integrate with a Subversion repository, making it a very useful choice for CompanyX.

Project Selection and Prioritization

As CompanyX is a small firm with extremely limited development resources it is very important that we choose projects carefully and prioritize them as best as possible in order to ensure that projects with the greatest potential for success are chosen over those that are less likely to have large returns, have a very high cost to implement or carry a high risk of failure.

CompanyX's model of software as a service means that our attention is always drawn, first and foremost, to maintaining the quality and integrity of our running applications.  Unlike traditional software companies, for whom software in the field seldom represents a financial loss when new bugs are found and may actually produce a financial gain through support contracts, CompanyX has a vested interest in keeping running applications running as smoothly, efficiently and effectively as possible.  Unpatched, known bugs are very evident to our customers and may represent potential security or performance flaws that will impact our own infrastructure and support costs.  Running applications always take full precedence over any forward development project.

New, greenfield projects are started infrequently within CompanyX, and choosing an appropriate project is an important decision that affects the company as a whole.  For a small firm like CompanyX, which is profitable and maintains a relatively even level of developer and engineer resources versus the workload in the pipeline, possibly the most important selection criterion for a new project is availability of resources.

In most cases at CompanyX, developer and engineering resources are not interchangeable commodities.  Developers, and especially analysts, are likely to be domain experts in a business area, and projects are often selected based on the availability of these skill sets.  If new developer or analytical resources suddenly become available, a new project may be initiated with very little planning in order to utilize those resources as quickly as possible.  In this way we are very opportunistic in our project selection process.  Cost/benefit and other factors are generally subsidiary to resource availability.  This is not only because of the importance of resource availability to us but also because resource availability is an easy-to-determine factor, while cost, benefit, MOV, ROI and the like are extremely subjective and prone to very high rates of miscalculation and misjudgment.  Given the availability of a straightforward deciding factor, we believe that we will obtain the best results by leaning heavily toward weighting the known rather than the unknown.

CompanyX is very small and project prioritization is not a significant factor.  Projects are generally started when resources become available.  There are, of course, a number of projects that are internal or infrastructure related and can utilize any number of different resources, and these do require prioritization, but, by and large, project prioritization is handled naturally by the sudden availability of a market opportunity or of resources which are a good match for a particular project.

Prioritization of these projects is handled by executive management and is usually determined by expected ROI.  Internal projects tend to be of moderately similar size and complexity, so valuations are relatively straightforward; manpower reduction and competitive advantage are the primary ROI determinations.

Team and Project Organization

CompanyX, being a small firm, needs very little and relatively informal team and project organization.  Each organizational unit, being relatively autonomous, generally has a single development team assigned to it.  This segregation is intended to create an environment where each team feels empowered to self-organize as needed and as is appropriate for the nature of its project.  This lack of central coordination also provides a more natural freedom for teams to choose architectures and platforms that best fit their needs rather than blindly following trends and directions forged by older teams and successful projects.

Steve Yegge of Google has stated that Google's wealth of great internal platforms can be a real boon for teams looking for resources, but that it also squashes innovation by pressuring teams to adopt systems with large initial overhead and to square-peg technologies onto Google's existing platforms (BigTable, for example) whether or not they are appropriate or advantageous, simply to justify the former investment in the technology.  This justification is clearly one that exists only on paper, because if a platform must be used even when it hurts the project then it has actually introduced a negative ROI rather than a positive one.

Because CompanyX does not size teams based on a need to meet a schedule or deadline, teams are chosen based on team member interaction, appropriateness for the team, technical needs and so on.  This team sizing method is designed around obtaining the maximum performance from each developer rather than picking an arbitrary speed for development and sizing a team until it is able to obtain that speed.  This sometimes costs us the ability to get to market as rapidly as possible in favor of overall per-person productivity.  The impact is not as dramatic as it may seem, however, as innovative, high-performance teams are able to move quickly and keep pace with much larger teams built without regard for team dynamic, communications, style and compatibility.

Typically within CompanyX teams range from two to five people.  We have found that smaller, more closely knit teams tend to obtain greater productivity per developer and that ideally teams should be in the two to three person range when possible.

Teams are only lightly managed, with self-organization as a high priority.  Obviously some management and direction is still needed from outside the team.  Direction is generally established by the business analyst assigned to each business area.  The analyst acts much like a manager, being involved in deciding strategy, task priority, direction and so on.  The business analyst is not a personnel manager, however, and no staff report to them.

Management is handled directly by the Director of Information Services.  This is the only dedicated management role in the CompanyX IT organization.  All IT staff report up to the Dir. of IS.  CompanyX is very fortunate to have the ability to avoid the bulk of the politics that occur in most organizations simply because of our size.

Virtual staff is important at CompanyX as it is in almost any organization with more than a single team.  Because CompanyX is small, though, virtual staff is still a relatively minor consideration.  Technology expertise may be shared across teams, but this tends to be quite casual.  The CompanyX wiki records the skill sets and areas of interest of employees, so searching for skill areas is possible.

At CompanyX, it is the goal of our managers to make it possible for employees to work.  The idea is not that we have hired employees who need to be managed but simply that they need direction from time to time and, more often, need someone who buffers them from the non-development needs of the organization.  It is the role of the business analyst and manager, in this case the Director of IS, to eliminate distractions, unnecessary paperwork and other busywork and to provide a politics-free environment as conducive to working as possible.  CompanyX's policy is that the manager is less a controller of the team than a resource for the team.  We believe that if we hire the right staff and provide them the ability to do great work then they will, so it is very important to keep out of their way and not stop them from being productive.

Project Management in the Continuous Sustainable Velocity Organization

In a traditional software development organization, project management is a rather heavyweight process, heavily involved in the day to day work of the development teams as well as in the production of a large number of project artifacts stemming, generally, from a need to report project progress, status, details and so on to upper management.  CompanyX does not operate with a need to produce project artifacts for upper management.

CompanyX management has the ability, and the responsibility, to pull project status by checking the project backlog and burndown lists.  In this way status can be monitored without the team needing to provide details back to management.  The monitoring is passive, not active.  At no point should the team stop doing actual, profitable work in order to produce paperwork for internal consumption.  Not only does this take time away from the development team and slow progress, but it also would require management to spend their time reading status reports rather than being productive themselves.

Because we do not market products before they release and do not have deadlines to meet in delivering products to our customers, the needs of our organization are much simplified compared with more traditional software firms.  This gives us an additional advantage in that we can spend more time innovating and producing software at lower cost, making it easy for us to outpace our competitors to market.

Some pieces of the traditional project manager role are played by the business analyst.  Important decisions such as when a feature is complete or what level of functionality is acceptable are determined by the business analyst.

The PMBOK specifies many tasks which are a part of the project manager’s role but a large majority of these are simply assumed by the creators of the PMBOK and are not actually necessary.  At CompanyX, our model is continuous development so we do not “time box” development work into a project that has a definite end date.  We cannot budget by release because we do not know when a release will be completed or what it will contain.

Instead of traditional budgetary means, CompanyX establishes an ongoing development budget that is defined in cost-per-time terms, such as $65,000/month.  This cost includes the salaries of the project team for that duration plus an estimate of the month's cost of electricity, HVAC, Internet access, software, hardware, networking, telephones, chairs, desks and so on.  Using this number, which is extremely accurate and varies in predictable ways, project budgetary planning can be carried out based upon the need for the project to continuously outperform its cost by way of its revenue.

Instead of determining a project's total cost up front, we determine the ongoing cost of a project and decide whether we are prepared to support that ongoing cost of development and support.  This still requires budgeting and financial planning, but the manner in which it is done is far more straightforward and intuitive.  It is also far more honest than the planning done in many organizations, because the ongoing support cost is difficult to fudge (if resources are assigned, cost is assigned) and total project scope is not part of the picture.  Scope creep, which is controlled by the business analyst, will affect development duration but will not impact the budgetary process.
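
As a small worked example (only the $65,000/month figure comes from the discussion above; the revenue numbers are invented for illustration): a project that has run for ten months at $65,000/month has consumed $650,000.  If it is generating $80,000/month in recurring revenue at that point, it covers its ongoing cost with $15,000/month to spare and would recover the accumulated $650,000 in roughly another forty-four months.  If it is generating only $40,000/month, the budgetary question is not whether the project is "over budget" but whether we are prepared to keep funding a $25,000/month shortfall while the feature set matures.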

There is no cause to make an early determination of scope before the project has begun.  In order to determine a project budget, a normal firm will need to decide upon scope, features, architecture and so on before the project budget has been approved and, therefore, before the project itself has begun.  CompanyX avoids the costly and error-prone process of making scope decisions when far too little is known about the project and, instead, determines a desired velocity and/or the budgetary burden that can be withstood, allowing the business analyst, along with the development team, to determine feature priority and to decide when the product has enough features completed to move from purely development into production.

Hiring Practices

One of the most important endeavors, if not the most important endeavor, undertaken by the staff at CompanyX, or arguably any company, is the accumulation of staff.  Because CompanyX focuses less on processes, procedures and documentation and more on team effectiveness, innovation and employee satisfaction, we are far more affected by the quality of the staff that we hire.  This greater employee dynamic can be seen as a benefit or a deficit.  If we hire great staff then we empower them to be creative and productive.  If we hire poor staff we empower them to do nothing.

CompanyX has always been, and must remain, very aggressive in its stance on hiring and retaining the best IT talent in all IT areas including engineering, development, administration, architecture, support and so on.  We have found that, by the very nature of our management practices and our use of continuous sustainable velocity, we have naturally managed to retain staff once they have been acquired.  Most ambitious and talented IT professionals, especially developers and designers, are thrilled to find an environment so dedicated to respecting their professionalism, a framework for their own learning, growth and production, peers who are passionate about their field and the projects on which they work, and a lack of red tape separating them from the work that they wish to do.

Far more difficult than retaining great talent is locating and identifying that talent in the first place.  At CompanyX it is very important that we find not only great skill and experience but also great drive and passion.  CompanyX is a company focused on innovation and to achieve this we require staff that will constantly seek out and discover new avenues for exploration.

Our first means of seeking out great candidates is to simply not seek candidates.  CompanyX has always hired exclusively through people seeking out an opportunity to work with CompanyX and through word of mouth.  This drastically reduces the chances of hiring someone who is desperate for work rather than seeking a wonderful development environment.  We do not want staff who are simply happy to have a job that pays the bills; we want staff who are actually excited about the work that they get to do and would be very unlikely to leave, because almost no other company offers the exciting, driven environment that CompanyX offers.

When in the hiring and interviewing process for new candidates, it is very important that we do not stress specific technical skills.  A candidate may happen to possess the select set of highly technical skills for which we test, which proves nothing.  Technical testing also requires us to test a candidate against a skill set for which we assume knowledge will be needed for the job.  This is a short-sighted methodology based around an immediate need and not the long term growth of the company.  No technology is so complex that we must require a candidate to bring that existing knowledge with her.

Far more important than a candidate possessing an immediate technical skill is that the candidate has a set of technical skills – any set.  What CompanyX actually cares about is the candidate's ability to grow and adapt.  The technical landscape is constantly changing and the skills useful today will not be useful tomorrow.  What we are looking for is a candidate who will embrace tomorrow's skills and learn them easily.  Otherwise an investment in an employee is an investment in stagnation.  If a candidate earns a job because of a single skill set then that candidate is also more likely to leave later, when we move forward, to look for "comfortable" work at a firm using legacy technologies.

Even within a single technology for which we might choose to interview, Java for example, there is a range of specific technical skills necessary to be useful on day one.  Expecting any particular candidate to use the exact same set of technologies and approaches that we are currently using would limit the potential candidate pool dramatically while likely eliminating candidates who could pick up the skill in a weekend and outperform most other candidates within a week or two.

For example, within Java, candidates may be used to platforms such as WebLogic, WebSphere, JRun, JBoss, Spring, Tomcat or WebObjects.  They may work with different databases such as Sybase, SQL Server, MySQL, PostgreSQL, Firebird, SQLite, BDB, Oracle, Informix, DB2 or Derby, and with ORM packages like Hibernate, iBatis and others.  They may use EJB 2, EJB 3 or POJO approaches.  They may run on Windows, Mac OS X or UNIX.  Perhaps they use Swing to make desktop applications or only write component services.  The variation is far too wide to expect that the right fit for CompanyX has been working with the same technology stack as us.  If we do find a candidate working with the exact same stack, then we risk that the candidate will not bring in fresh approaches and ideas, which are some of the greatest value available from new hires.

Along with technical prowess, we are also very concerned with finding candidates who are extremely passionate about technology and are driven not just from a career perspective but also by a desire to excel, to create, to grow and learn and to make, manage or design amazing software products.  A candidate with drive but little experience is more valuable than one with much experience who is not concerned with future personal growth.  Working at CompanyX is a lifestyle of technical excellence.

Along with the standard hiring process we must also consider the opportunities that lie in an intern or co-op program.  Interns may come from any background including current technical IT work, academia, high school, etc.  In an intern we look primarily for drive.  Drive is most recognizable in intern candidates of any age who are willing to work on their own projects, at home, in their own time and have been doing so long before becoming interested in working with CompanyX.  This same drive is sought after in any candidate, not just one at an intern level, but unlike more senior candidates, who have an opportunity to differentiate themselves through career experience, skill set areas and previous projects, interns are viewed almost exclusively through their "at home" experience.

Recognizing two important facts about academic backgrounds is also important in picking candidates.  The first is that many of the best candidates will not have attended college or university because they were too busy already being in the field.  The second is that if we desire to add academic experience to a candidate we can easily do so after they are hired; there is no reason to require it before the hiring process begins.  A tertiary issue is that the majority of collegiate IT and CS programs are both far too easy and often avoid core topics that would be expected, which lessens the value of judging a candidate by their academic background even when it appears to be pertinent.  Because of these facts it is CompanyX's policy to disregard academic experience except in cases where the candidate has undertaken the process of obtaining a degree after having entered, and while remaining active in, the field.  Using university studies as an excuse for absenteeism from the field is not counted as positive experience time but as a resume gap.

Outsourcing is an important topic for companies like CompanyX.  Because we have no centralized, physical location we are well suited to both outsourcing and offshoring.  We will not, as a company, consider outsourcing labor to another software firm, both because we believe that other companies are highly unlikely to be as competitive as we are, leading only to higher costs and lower output, and because our value lies in creating software and managing IT solutions ourselves.  It is these key tenets on which CompanyX builds its reputation and creates its value.

Hiring employees from outside the United States, however, is another matter.  CompanyX is officially nationality and locale agnostic.  We have no reason not to hire from international locations.  Even time zone issues are of little importance as our current staff regularly works floating hours and often works weekends or at night.

Quality Assurance and Testing

At this time, CompanyX does not utilize a dedicated QA and/or testing team for our software.  All of our testing is done by the developers and analysts using informal methods.  This is far from ideal and leaves a lot of room for critical improvement.  This is an area in which CompanyX should focus heavily, developing a team and a process for standardized testing.

The first step in preparing for formal testing is the creation of formal, separate testing and production environments.  At this time CompanyX has development environments running on desktops throughout the company and all testing is done directly at that level.  We are currently in the process of building out a dedicated testing datacenter which will be used to run pre-production code in a live setting for serious integration and testing work.  We currently have a Solaris testing environment built, and a Windows 2008 environment is already in progress.

After we have a dedicated testing environment, our next step is to begin to build a dedicated testing team.  We will need professional testers with testing experience and knowledge who can help us to build a testing practice in the same manner in which we have built a development practice.  This includes implementing both white box and black box testing, test automation beyond unit testing, interface testing and more.

We have been fortunate that our small, tightly integrated user base has thus far been tolerant of bugs, as almost all of these users clamor to be involved in early testing, but this will change.  Our development methodology tends to introduce a low number of bugs, but we should be striving to lower that number as far as possible.  As our products are used by a larger and larger set of clients this will become more and more important.

Developer Mentoring

Mentoring is an important element in any professional development environment.  Mentoring adds organizational value for junior staff members as it gives them an opportunity to learn and grow with direct access to senior staff members with real world experience and value unobtainable through other avenues.

Mentoring is also important because it provides an opportunity for knowledge to flow back through the organization.  There are few mechanisms through which experience, whether personal, project-based or technical, gets disseminated in a meaningful way through an organization, and one of the most powerful is mentoring.

The process of mentoring also helps to build stronger inter-personal ties within the company.  People on different teams or in different technical areas can use mentoring relationships to build networking ties and to become more acquainted with the organization and its resources.  Mentoring is significant in cementing corporate culture when new staff come on board.

Project Lifecycle Issues under Continuous Sustainable Velocity and Software as a Service Models

In a CSV and SaaS organization we face particular challenges less common in more traditional software companies in determining exactly what constitutes a project lifecycle and how that relates to its team and team members.  When software is delivered as a service it tends to take on a wholly different type of life than does software that is boxed and shipped.

Traditional box and ship software, or other types of software typically delivered in a “versioned” way, has a natural project lifecycle associated with each new version of the software.  Often teams will be known by their version such as the Word 2003 and Word 2007 teams at Microsoft.  This allows for projects to have a definite initiation point and a clear, or mostly clear, finishing point.  Some amount of ongoing support and patching generally continues but is minor in contrast to the project itself.

When working with Software as a Service and especially when doing so without the benefit of regular iterations there is no clear division between software development and its ongoing support.  For example, ProductY, our flagship product, ran as a project for approximately eight years.  During this time it went through initial development, ongoing support, new features, performance enhancements, stability improvements, underlying platform migration and more.

After eight years the ProductY codebase was replaced by the ProductY 2 codebase.  We expect this new codebase to go through a similar lifecycle, but lasting approximately twelve years; we anticipate the retirement of this codebase by 2020.  In this case, the team responsible for both codebases is the same.  Not one person changed between the original ProductY and ProductY 2.  From the perspective of the team, ProductY 2 is simply a natural continuation of the original ProductY, and part of the lifecycle involved the migration of clients from one system to the other.

When applications go through lifecycles of this nature and on a timescale of this magnitude it is exceptionally difficult to differentiate between different phases of the project.  Is initial development up to beta one project and ongoing support another?  Sometimes, though, there is far more code developed after beta than before.  Are the two ProductYs separate projects?  Most likely that is true; it is the most appropriate project divider that we have identified.  Still, this is a very long time for the concept of a project to survive.

Because of these issues, CompanyX is scarcely able to identify teams by project at all.  This is difficult because the thought processes of developers, and humans in general, invariably bring them back to the project.  At this time we have not determined the correct answer to this problem, but we do have an approach that is, thus far, working while still being a work in progress.

Instead of organizing by project, we have been organizing our teams by application or feature that is a long-term, externally identifiable entity.  Therefore, to us, ProductY and ProductY 2 are the same entity as they are a single product to our customers.  A separate hazardous waste product, while similar and sharing the business unit, would be a separate identifiable entity and would be a separate “project” internally to CompanyX.  This approach, at this time, continues to function but also continues to evolve.

The overall lifecycle of one of these "projects" is very fluid.  Project initiation is very similar to a traditional project approach used by any software firm.  Initial requirements gathering and basic design is done and then code development commences.  Once a potentially useful level of development is deemed complete by the business analyst and testing has finished, the application is released into production.

Once an application is in production there is no guarantee that it will be used by customers.  At this point we generate a basic sales and marketing framework such as a web site, support contacts, etc.  The appropriate marketing manager determines whether the application is marketable in its current state.  If so, marketing begins to attempt to garner interest in the product.  If not, application development continues as before, attempting to complete more features and increase functionality.  The development process is not driven by marketing except in rare circumstances, nor is marketing driven by development.  Customers seeking our products always have a means to find them, even when marketing does not feel that the product is feature-complete enough to justify marketing expenditures, but new customers are unlikely to be located until marketing sees the product as more mature.

Regardless of the involvement of the marketing team, development continues on in the same manner.  The purpose of the initial lean release is not necessarily to get early adopters but to give customers who are anxious to use our products every opportunity to do so, to get earlier real-world feedback and to put the stake in the ground as to the launch of the codebase.

Throughout the lifecycle of the code, development continues.  In some cases development may slow as the product ages and becomes more mature.  There is definitely a need for more development while ramping up a product than in ongoing support.  However, unlike 37signals, which runs a similar model, we are not targeting lean software as our primary market but extremely powerful and robust software with powerful business modeling potential, and this gives us more opportunity to mold our products long after their initial release into the market.

In addition to feature growth, old code bases will often see refactoring, bug fixes, performance enhancements, library or platform upgrades, interface updates, etc. which supply a large amount of continuing development work long after initial design and release pieces have been completed.  At some point, as we saw with the ProductY product line, a new codebase will likely be generated and at that point development begins anew with a blend of fresh approaches and new designs along with new technologies while drawing from former knowledge and experience.

In the case of the ProductY, the initial system was a web-based Microsoft DNA architecture application running on VBScript/ASP in the Windows NT4 and SQL Server 7 era.  It relied upon Internet Explorer 4 as its client side platform and used a small VB6 client component to handle hardware integration.  ProductY 2, while providing identical functionality to the customer, does so using C#/ASP.NET on Windows 2003/2008 and MySQL with a single, non-web based client side C# application that handles user interface and hardware integration concerns along with local caching with a client side database.  The two architectures and approaches are very different but the functionality is the same.

Packaging and Deployment

CompanyX has a fairly unusual set of packaging and deployment needs.  Because all of CompanyX's software is deployed internally to our own platforms and our own systems, we have an advantage over most firms in that we have little need for broad platform compatibility testing, nor do we necessarily need to package our applications for easy deployment.

It is tempting, given the simplicity with which we can have the development teams perform their own deployments through a manual process, to completely forego the entire process of performing application packaging.  This makes sense in many ways.  It does, however, present a few problems.

The first problem that a lack of packaging creates is an over-dependence upon the development teams.  The developers themselves should not be required to be directly involved in the release of a new package, at least not with the physical deployment of the package.  Having a package created allows the system administrators to perform application installations easily and uniformly.  This is not necessarily advantageous on a day to day basis, but when there is an emergency and a server must be rebuilt quickly, having the applications ready, in different versions, for the admins to apply to the environment without developer intervention is very important.

The second problem that arises without a packaging process is that software tends to become extremely dependent upon a unique build and deployment environment.  Changes to the environment made from a system administration and engineering perspective may inadvertently break the deployed software.

In a Red Hat Linux environment, for example, an application deployed by hand, outside of the packaging system, may have a library dependency unknown to the system administrator.  That library may be updated, moved or removed, breaking the software quite accidentally.  If that same software were properly packaged as an RPM (Red Hat Package Manager) package, then the RPM system itself would check dependencies system-wide and block the removal of, or changes to, other packages needed by the application software.  When installing the application software, a tool such as YUM (Yellowdog Updater, Modified) would also be able to satisfy dependencies automatically rather than sending the system administrator off to find packages supplying the necessary libraries.

For a company like CompanyX, our packaging needs are very simple.  Packaging formats such as RPM for Linux (Red Hat and openSUSE) and the SVR4 datastream format for Solaris (i.e., native Solaris packages) provide important failsafe mechanisms that make installation fast and simple while protecting the software from inadvertent damage.  They also make version control and system management faster and easier.  More advanced packaging, such as graphical installers, would be overkill in this environment and would, in actuality, make the deployment process more cumbersome, considering that all potential deployers of the software are skilled IT professionals rather than end users and customers.

Management Reporting

Earlier I spoke about the availability of project backlogs and burndown lists as important tools for management to keep an eye on the status of projects without needing to interfere with the project team itself or to send agents to collect data on the team.  In many cases this is not enough information for management or for the rest of the company.

Currently CompanyX works on the basis of the basic backlog and burndown lists along with word of mouth, wiki posts, email distribution lists and a bidaily status and design call in the late evenings.  These tools, however, are not scalable and do not provide adequate reporting to parties outside of the direct management chain.  More information is needed but it is critically important to keep management from becoming directly involved and interfering with the teams and the projects.

The first status report that should be created for management is the build report from the continuous integration system mentioned above.  Management should get a report, generated every time the system runs, that details which tests are passing and failing.  This allows detailed progress information to be gathered without requiring any additional input from the team itself.  This is a good use of best practice processes that are already being put into place.

The second reporting tool, valuable not just for management but for the entire organization, is developer, analyst and team blogs.  Having a project/team blog that is updated by the business analyst and/or development lead at least once a week gives insight into the projects for other teams, interested parties and, of course, management.  By providing all employees with their own internal blogs, there is a system for personal status reporting and organizational information gathering.  Tools that may work well here include SharePoint, Alfresco, WordPress and Drupal.  Anyone in the organization would have the ability to subscribe to an RSS feed or to visit a website to find the latest information from any person or project.

In addition to traditional blogs, which are great for technical posts and project status updates, the more recent concept of the microblog, made popular by Twitter, may also be beneficial to the organization.  Microblog posts are generally limited to 140 characters per post, just enough for a sentence or two.  This format could be useful for allowing developers to post their thoughts, locations, status, needs and so on throughout the day.  CompanyX has even written our own client for making these types of posts easily from the command line while working in UNIX systems, which is extremely handy for developers wanting to effortlessly make quick statements to the company.  For internal use a microblogging platform server such as Laconi.ca would be appropriate, not a public service like Twitter or Identi.ca.

As an example of how microblogging could impact development, a developer might post “Stumped on a C# RegExp.”  Someone with extensive C# regular expression experience might see the post and instant message that developer offering some assistance.
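
A minimal sketch of what such a command-line client might look like follows.  The host name, account details and API path are placeholders, assuming a Laconi.ca-style server exposing a Twitter-compatible statuses/update call; this is illustrative only and is not the actual CompanyX tool.

using System;
using System.IO;
using System.Net;
using System.Text;

// Hypothetical micropost client.  Usage:  post "Stumped on a C# RegExp."
public static class MicroPost
{
    public static int Main(string[] args)
    {
        if (args.Length != 1)
        {
            Console.Error.WriteLine("usage: post \"status text\"");
            return 1;
        }

        byte[] payload = Encoding.UTF8.GetBytes("status=" + Uri.EscapeDataString(args[0]));

        // Placeholder host and credentials; a real deployment would read these
        // from a per-user configuration file.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://microblog.companyx.example/api/statuses/update.xml");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Credentials = new NetworkCredential("username", "password");
        request.ContentLength = payload.Length;

        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(payload, 0, payload.Length);
        }
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Posted ({0}).", response.StatusCode);
        }
        return 0;
    }
}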

Internal Training and the Development of Practice

The idea of Practice within the IT organization is a critical one.  By Practice we are referring to the idea of approaching a discipline as a true professional or, perhaps even more appropriately, as an amateur in the classical sense.  The idea of Practice is, unfortunately, often a nebulous one, but it is key to the CompanyX approach to software development as a whole.

Practice refers to the holistic approach to the technical discipline whether that discipline is Java programming, Linux system administration, Windows desktop management, project management, business analysis or software architecture.  This means that we must look at the individual technical area as an area in which we can learn or develop techniques, strategies, best practices, etc.

Instead of approaching each technical area as just another tool in the tool belt, we look to encourage serious study of the technical areas which may include reading books, studying in a collegiate setting, joining user groups, attending vendor seminars and events, becoming active in online communities, writing and publishing papers, independent research and more.

Internally we support general educational opportunities through support of collegiate academic programs, unlimited book budgets and the Borders Corporate Discount program, internal training, mentoring programs and organized trips to vendor events, seminars and support groups.   Most importantly, we believe, it is important that we promote a culture of education and not merely offer opportunities but make learning an important part of everyday life within the organization.  We may also want to pursue projects such as having a reading group which meets to discuss readings in important industry publications or discusses seminar talks and panel discussions.

Software Development Best Practices and Standards

Even in an organization such as ours it is important to maintain practices and standards throughout.  Standards apply in many areas.  Applying standards, when possible, is an important factor in making code readable and rapidly adaptable when working with multiple developers.

Two areas in which coding standards need to be applied, across the board, are in naming conventions and code style standards. Both of these areas fall outside of technical areas of expertise and contribute to communications.

For a firm the size of CompanyX, developing internal style and convention guides would be extraneous overhead and unlikely to benefit the environment.  Instead, we have decided to standardize upon the widely accepted Cambridge Style standards.  These are well known standards used not only in business but often in academia, and they are widely used in professional publications.

Using these standards assures us that potential candidates are, at the very least, familiar with the standards even if they have not used them directly, will recognize the patterns easily and will write in a common style for the industry when publishing as well.  By using such common styles CompanyX can leverage the benefits of style and standard without unnecessary overhead and expense.

Style standards apply not only to the "beautification" of code, including things like the appropriate use of white space, placement of braces and use of multiple elements within a single line, but also to naming conventions for variables, methods, classes, objects and so on.  Through the proper and standard use of such styles and standards, programs become more readable, portable, extendable and maintainable.  The ultimate purpose is, we believe, to support the concept of Literate Programming as best stated by the legendary Donald Knuth:

“I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature. Hence, my title: ‘Literate Programming.’

Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style. Such an author, with thesaurus in hand, chooses the names of variables carefully and explains what each variable means. He or she strives for a program that is comprehensible because its concepts have been introduced in an order that is best for human understanding, using a mixture of formal and informal methods that reinforce each other.”

Process Management Standards

Many software firms use process management standards such as the Capability Maturity Model Integration and ISO standards to control, improve or monitor processes.  No formal process improvement program or strategy is currently in use within CompanyX.

Capability Maturity Model Integration

The Software Engineering Institute at Carnegie Mellon developed the Capability Maturity Model Integration system with the idea that this model would be utilized by organizations seeking to build a framework of continuous process improvement.  The CMMI is a set of guidelines as to what processes should be implemented, but not how those processes should be implemented, leaving the actual implementation up to the individual organization.  This is good in that it allows a great deal of flexibility in how the system is actually used within a particular company.

The ideas behind CMMI are almost exclusively targeted at very large, non-IT organizations.  These ideas are not necessarily bad, but they assume a large amount of process overhead which is not present in many companies.  Introducing process where it is not needed, just for the sake of managing and improving said process, is unlikely to be an effective use of available resources.  Much of CompanyX's management approach has traditionally been focused on reducing management and process overhead in order to make the company more efficient, agile and responsive while reducing costs.  The introduction of a process such as CMMI could easily double operational expenditures even without pursuing official certification.  It is also likely to reduce agility and increase ongoing development cost by reducing development team efficiency.

Lean

Lean Software Development is an application of the principles of Lean Manufacturing to the software development process.  Lean is simply an attempt to eliminate waste from the development (or manufacturing) process and is directly related to concepts such as Just in Time manufacturing – the practice of having parts arrive at the factory just as they are needed for assembly, reducing storage cost and leveraging the time-value of money.

The concepts of Lean echo (or, in fact, predate) concepts widely accepted in software development such as YAGNI (You Aren't Gonna Need It), which encourages developers to defer features until it is clear they will actually be needed.  This often reduces the total amount of effort needed to make a product, since useless code paths are less likely to be constructed.  Extra code not only costs at the time of creation but also costs later to maintain, and it generally reduces system performance.

While Lean is often associated with process models and systems such as CMMI and Six Sigma, Lean Software Development is more appropriately an Agile development methodology and is organized as such by its designers, Mary and Tom Poppendieck.  Lean is a relative newcomer to the methodology space, appearing as recently as 2003, and appears to be mostly a repackaging of core Agile principles without the addition of many of the "scarier" practices such as pair programming in XP.

For any organization already utilizing an agile process, Lean seems to be less a firm methodology and more a repackaging of obvious and common sense items.  Lean's greatest advantage, most likely, is for organizations caught in the "manufacturing" mentality: it lets them pick up a truly Agile framework and see it as "emulating Toyota".  Lean may also be easier to sell to managers who otherwise won't accept processes without management buzzwords attached.

Six Sigma

Six Sigma is a manufacturing process improvement program initially developed by Motorola in 1986.  Six Sigma and Motorola processes are exceptionally well known to businesses in Western New York, as many of their practitioners, including then Motorola CEO George Fisher, were brought to Eastman Kodak.  These manufacturing-designed processes and management practices, when applied to Kodak, a research and development organization, served to drive out innovation, cripple agility, alienate the workforce and effectively dismantle the company.

Six Sigma itself is a manufacturing improvement process that focuses upon a feedback loop of quality improvement, following in the tradition of many standard quality improvement procedures.  In fact, Six Sigma has been criticized for being nothing more than marketing fluff applied to obvious principles long established in the industry.  Six Sigma is often associated with other initiatives that have been labeled management fads, such as Total Quality Management and Reengineering.  These processes and their ilk were lambasted by Scott Adams in his landmark business book The Dilbert Principle.  Many of these fads and processes have come to be the face of poor management today even though they have mostly faded toward obscurity.  Six Sigma remains more firmly entrenched and may not constitute a fad, since it is simply a repackaging of traditional concepts.

The biggest problem with Six Sigma is that it is a manufacturing methodology that relies upon repeatable processes and measurable error rates to achieve its goals.  As software development has neither of these things, Six Sigma is meaningless in this space.  Even Eastman Kodak, a company whose fortunes lay mostly in its research and development, saw Six Sigma and heavy manufacturing processes erode its competitive advantage when its organizational focus was shifted to low-profitability and easily off-shore-able plastics manufacturing rather than pharmaceutical and technology research.  Even Motorola, the poster child of Six Sigma, saw its fortunes dwindle compared to its chip making competitors and the market as a whole once its focus shifted from new product development into manufacturing.  In first world markets, any process that can be measured and improved through a system such as Six Sigma should be earmarked for third world consideration.  In those markets, the principles behind much of Six Sigma should probably be applied to manufacturing processes, as there is much value when the core business is actually manufacturing and not product development and innovation.

The mistake often made with Six Sigma outside of manufacturing is the belief that all processes, even those that are inherently creative, can be meaningfully measured and quantified.  Few people would think of applying the principles of Six Sigma to painting a picture, composing a symphony, writing a novel, acting in a play or dancing in the ballet.  Why?  What about an individual's expression through dance is less quantifiable than the creation of software?  Software, according to Donald Knuth, is literature and, according to Paul Graham, the process of making it is closer to art than engineering, and even engineers would be appalled to find their engineering processes being measured and quantified as if they were factory workers – interchangeable cogs in a giant machine.  If companies believe that design processes are interchangeable and replicable, why are they not automating them?  Clearly the end products of mechanical engineering and software development are both delivered as computer data files and so would be perfect candidates for total computer automation.  Why has this not happened?  Because the value of both of these processes lies in their creativity and not in their repeatability.  Therefore Six Sigma has no meaning in this context.

Lean Six Sigma

In theory, Lean Six Sigma is the application of Lean’s Agile development principles to a Six Sigma process improvement environment.  In manufacturing circles this is an obvious marriage.  In software circles, however, Lean does not in any way explain how Six Sigma can be applied to the creative process of software design and writing.  Even Mary and Tom Poppendieck, the creators of Lean Software Development, when speaking about Lean Six Sigma, are only able to explain the application of Lean in a software environment and neglect to explain the use of Six Sigma.

Summary

There are two things that often result from the application of process controls in an organization.  The bad is the creation of unnecessarily heavyweight or even possibly harmful processes that encumber or disrupt productivity and increase overhead costs, making it more difficult to achieve profitability.  The good is the codification of common sense principles.  What is difficult is to avoid the first while leveraging the second, without letting the second become the first.

Few process management advantages come from applying unintuitive principles to software development.  Sometimes principles which are obvious once you are aware of them, such as YAGNI, are non-obvious or even counter-intuitive to novice developers, managers and teams.  In cases where the organization is not aware of these principles at an organizational level and needs to push them down to teams, it is generally effective to codify them.  However, we do not believe that it is valuable to codify principles which are already understood and obvious.

Every organization has a different level of organizational knowledge in software development.  The larger the company, the greater the need for process control and codification.  Smaller companies have more opportunity to transmit good processes through the idea of Practice learning without the need for formal codification.

By transmitting through Practice we have an opportunity to teach, learn and grow rather than by enforcing.  Transmitted knowledge gives everyone an opportunity to evaluate, internalize and comprehend procedures and, most importantly, it gives everyone the right to question them.  If a process is codified within the organization then only a select few people are likely to have any real ability to question the applicability of a process or procedure and innovation may be stifled.  Few processes are appropriate all of the time and so teaching everyone when they are useful and when they are cumbersome empowers everyone to be as effective as possible.  Codification takes that decision out of the hands of the people with the most direct ability to determine the appropriateness of a procedure within a particular context.

Organizations blessed with highly skilled staff need very little process control, as the staff will generally regulate themselves and follow meaningful processes on their own.  Some process control is still necessary; of this there is no question.  A minimally skilled organization, on the other hand, will require a great deal of process control to obtain any degree of productivity at all.  This is the fundamental theory behind a great many business and management processes – the concept of "managing to the middle".  Managing to the middle means that instead of acquiring, training and empowering great staff, the company has decided to acquire budget resources and use heavy process control to obtain some level of production from staff who would be mostly dysfunctional without those processes.

At CompanyX our base principles rely on the fact that we acquire great, motivated resources and empower them for innovation and creativity.  This has been said many times in this text and it cannot be stated enough.  Every management decision made at CompanyX must be done in the light of the fact that we trust our teams, and we believe in our teams.  CompanyX is not a code manufactory.  CompanyX creates literature.  It creates art.  It creates innovation.   Therefore it is imperative that we refrain from utilizing process control beyond that which is absolutely necessary and, whenever possible, rely upon the foundations of Practice and professionalism to establish working processes that are themselves flexible and mutable.

Risk Management

In software development, risk is a constant presence that must be addressed.  Risk comes in many forms: choosing to enter an inappropriate marketplace, finding a new competitor after a project has begun, poor design, misidentification of business needs, bug-ridden software implementations, failed vendor support for key platform components, regulatory changes, unforeseen technical difficulties, team malfunctions, key employee health issues, lack of internal political support and more.  Risk is unavoidable, and it is by facing risks that we are presented with an opportunity to move into new, unproven territories.  With great risk comes great reward.  But nothing is without risk; even an absence of action does not truly mitigate risk and, in fact, is likely to exacerbate it.

“If you don’t risk anything, you risk even more.” – Erica Jong

Risk Identification

The first step in dealing with risk is identifying potential points of risk.  For CompanyX we know that our most significant points of risk come from personnel dynamics – health issues, availability and the like – and from post-production market uptake.  There are many other risk areas, but we are more susceptible to these two than most companies and need to be acutely aware of their implications.

Before any project begins a detailed risk assessment process should be followed.  This procedure should include the development lead, the design lead and financial management at a minimum.  Our organization is fortunately able to avoid some areas of often high potential risk because of our minimal political turmoil, abundance of project support and lack of risk aversion.  For example, CompanyX has never been forced to abandon a project because of a withdrawal of internal executive sponsorship, a problem which may plague even the most successful of projects in other companies.

Risk Assessment

Once a comprehensive set of foreseeable and reasonable risks has been collected, we must assign to each a likelihood of occurrence and a level of impact.  Obviously this measurement is highly subjective.
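
Even a subjective scoring becomes more useful once it is written down consistently.  The sketch below is a minimal, hypothetical risk register entry in Java – the names and scales are invented for illustration – which scores each risk as likelihood times impact so that risks can at least be ranked against one another.

    // A hypothetical risk register entry: exposure = likelihood x impact.
    class Risk {
        final String description;
        final double likelihood;   // subjective probability of occurrence, 0.0 to 1.0
        final double impact;       // subjective cost if the risk occurs, e.g. in dollars

        Risk(String description, double likelihood, double impact) {
            this.description = description;
            this.likelihood = likelihood;
            this.impact = impact;
        }

        // Useful only for ranking risks against each other, not as an absolute measure.
        double exposure() {
            return likelihood * impact;
        }
    }

Ranking by exposure does not remove the subjectivity, but it forces the likelihood and impact judgments to be recorded where the risk team can argue about them.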

Unlike a larger firm, which may be able to spread risk widely between projects and hedge high-risk projects with low-risk ones, we are stuck putting our few eggs into very few proverbial baskets, and our organizational exposure to a single failed project is quite high.

Risk Mitigation and Handling

Once risks are understood there are four options for dealing with them: risk avoidance, reduction, retention and transference.  Each approach must be considered and decided upon by the risk team.  In general, however, it is in the interest of CompanyX to practice risk retention when possible within the software development practice because our ability to withstand failures internally is generally better than the industry average, making other forms of risk mitigation more costly by comparison.  By retaining risk we have a greater opportunity to leverage our internal skills more effectively.  Obviously this does not apply when risk can simply be avoided.
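
Continuing the hypothetical register sketched above, the four handling options can be captured explicitly so that every identified risk carries a recorded decision.  The selection logic here is only an illustration of the retention-biased policy described in this section, with an invented threshold parameter, not a prescription.

    // The four classic handling strategies for an identified risk.
    enum RiskStrategy { AVOID, REDUCE, RETAIN, TRANSFER }

    class RiskHandling {
        // Illustrative policy only: avoid when possible, otherwise lean toward retention
        // as discussed above, transferring only the very largest exposures.  Reduction is
        // treated here as an activity applied on top of a retained risk rather than chosen.
        static RiskStrategy choose(double exposure, boolean avoidable, double transferThreshold) {
            if (avoidable) {
                return RiskStrategy.AVOID;
            }
            if (exposure > transferThreshold) {
                return RiskStrategy.TRANSFER;
            }
            return RiskStrategy.RETAIN;
        }
    }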

Summary

CompanyX has pursued process management very diligently in the past and many of our processes and procedures reflect a dedication to research, customization and independence, but we still have a very long way to go.  There is much that we have to learn from the industry and many best practices that we still need to apply.

Building a strong process supporting strong projects requires constant diligence, reevaluation and reassessment.  There are always areas in which improvement is possible and must be pursued.
In this document we have identified areas of improvement that should be addressed, such as our lack of organized testing, our lack of support for a testing Practice and our need for more formal risk assessment procedures.  New software systems, such as continuous integration and discussion forums, need to be installed and made available to the development teams.

In the past year we have been making many strides in the right direction and improvements are very visible.  Perhaps our greatest achievement of the past year is the use of the wiki format for storing procedures, design documents and other forms of organizational knowledge in a repository that is not only highly available but also easily searchable.  It is very exciting to see how this resource will be adapted for use in the future.

Whatever process improvements we seek out in the future it is imperative that we remain focused upon the fact that our value lies in our staff and that any process which stymies their desire to create or hampers their ability to remain productive is dangerous and should be avoided.  If we undermine our own value then we have lost the very resource that we were hoping to manage through process control.  Like any form of management, process management has more power to damage than to enhance, and care must always be taken.

Process management must be a tool for enabling innovation and creativity because it cannot be a tool to create these things.

References

Wells, Timothy D. Dynamic Software Development. New York: Auerbach, 2003.

Brooks, Frederick P. The Mythical Man Month: Essays on Software Engineering.

Graham, Paul. Hackers and Painters: Big Ideas from the Computer Age. Sebastopol: O’Reilly, 2004.

DeMarco, Tom & Lister, Timothy.  Peopleware: Productive Projects and Teams.  New York: Dorset House, 1987.

Rainwater, J. Hank.  Herding Cats: A Primer for Programmers Who Lead Programmers.  New York: Apress, 2002.

Spolsky, Joel.  Joel on Software.  New York: Apress.

Spolsky, Joel.  More Joel on Software.  New York: Apress, 2008.

Mason, Mike.  Pragmatic Version Control: Using Subversion – The Pragmatic Starter Kit Volume 1.  Raleigh: Pragmatic Bookshelf, 2006.

Hunt, Andrew & Thomas, Dave.  Pragmatic Unit Testing in Java with JUnit – The Pragmatic Starter Kit Volume 2.  Raleigh: Pragmatic Bookshelf, 2003.

Clark, Mike.  Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications – The Pragmatic Starter Kit Volume 3.  Raleigh: Pragmatic Bookshelf, 2004.

Hunt, Andrew & Thomas, Dave.  Pragmatic Programmer, The: From Journeyman to Master.  New York: Addison-Wesley Professional, 1999.

Spolsky, Joel & Atwood, Jeff.  Stack Overflow Podcast, Episode 23 – Seven Mistakes Made in the Development of Stack Overflow Segment.

Beck, Kent.  Test-Driven Development by Example.  New York: Addison-Wesley, 2003.

Beck, Kent & Andres, Cynthia.  Extreme Programming Explained: Embrace Change.  New York: Addison-Wesley, 2004.

Hass, Kathleen B.  From Analyst to Leader: Elevating the Role of the Business Analyst – The Business Analyst Essential Library.   Vienna: Management Concepts, 2008.

Cockburn, Alistair.  Writing Effective Use Cases.  New York: Addison-Wesley, 2000.

Ruby Module Test::Unit.  http://ruby-doc.org/stdlib/libdoc/test/unit/rdoc/classes/Test/Unit.html

Vermeulen, Allan; Ambler, Scott W.; et al.  Elements of Java Style, The.  New York: Cambridge University Press, 2000.

Baldwin, Kenneth; Gray, Andrew; Misfeldt, Trevor.  Elements of C# Style, The.  New York: Cambridge University Press, 2006.

Ambler, Scott W.  Elements of UML 2.0 Style, The.  New York: Cambridge University Press, 2005.

Knuth, Donald.  “Literate Programming” in Literate Programming, CSLI.  1992.

Wikipedia Contributors.  “Capability Maturity Model Integration”.  http://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration November, 2008.

Wikipedia Contributors.  “Six Sigma”.  http://en.wikipedia.org/wiki/Six_Sigma  November, 2008.

Wikipedia Contributors.  “Lean Software Development”.  http://en.wikipedia.org/wiki/Lean_software_development  November, 2008.

Wikipedia Contributors.  “Risk Management”.  http://en.wikipedia.org/wiki/Risk_management  November, 2008.

Adams, Scott.  Dilbert Principle, The.  New York: Collins, 1997.

Poppendieck, Mary; Poppendieck, Tom.  “Why the Lean in Lean Six Sigma”. http://www.poppendieck.com/lean-six-sigma.htm  November, 2008.

Product References

OpenFire Instant Messaging Server
Spark Instant Messaging Client
Yahoo! Zimbra Email and Groupware Server
SSL-Explorer VPN Server
DokuWiki
Microsoft SharePoint
Alfresco CMS
Vanilla Forum
phpBB Forum

Robert Dewar on Java (and College)  https://sheepguardingllama.com/2008/08/robert-dewar-on-java-and-college/  Mon, 04 Aug 2008 23:43:10 +0000

Two recent interviews with Prof. Robert Dewar of NYU, Who Killed the Software Engineer and The 'Anti-Java' Professor, have been popular on the web lately, and I wanted to add my own commentary to the situation.  These interviews arise from Dewar's Software Technology Support Center article, Computer Science Education: Where are the Software Engineers of Tomorrow?  As someone who takes his role on a university computer science / computer information systems professional review board very seriously, I have spent much time considering these very questions.

Firstly, Prof. Dewar is hardly alone in his opinion that Java, as an indicator of the decline of computer science education in America, is destroying America’s software engineering profession.  The most popular example of someone with similar opinions would, of course, be the ubiquitous Joel Spolsky (of Joel on Software fame) in his Guerrilla Guide to Interviewing or in Stack Overflow Episode 2.

The bottom line in these arguments is not against Java but about the way in which colleges and universities teach computer science.  Computer science is an extremely difficult discipline, but universities will often substitute simple classes for core CS classes.  Dewar states that this is largely because enrollment has dropped off in these programs as the field has become less attractive and students choose lower-hanging educational fruit.  Universities put pressure on the departments to increase enrollment, often by lowering standards and eliminating hard requirements.  Additionally, difficult programming classes, like deep C or assembler, require more highly trained, and therefore more expensive, instructors, so this too causes academia to avoid teaching such courses.  A trained C or C++ developer has much better compensation prospects in the "real world" than in academia.

Java itself is a great language and no one, in this case, is saying that Java is not or should not be popular in real world development.  But Java is a language designed for rapid software creation and includes a staggering number of built-in libraries.  Almost anything truly difficult has already been addressed by Sun's own highly skilled developers and does not need to be reworked by a working developer.  Working with Java requires only a rudimentary knowledge of programming.  This, by its very nature, makes using Java as a learning environment a crutch.  Learning to program in Java is far too easy and many, perhaps most, programming concepts can be easily avoided or simply missed accidentally.  (Anything that I say here could apply to C# as well.  Both are great languages but extremely poor for teaching computer science.)
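
A trivial, hypothetical example of the point: sorting a list in Java is a single library call, so a student can produce working code without ever confronting what a sorting algorithm, a comparison function or algorithmic complexity actually are.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class SortDemo {
        public static void main(String[] args) {
            List<String> names = new ArrayList<String>();
            names.add("Carol");
            names.add("Alice");
            names.add("Bob");

            // One library call does all of the hard work; the student never sees the
            // algorithm, its complexity or the memory management behind it.
            Collections.sort(names);

            System.out.println(names);   // prints [Alice, Bob, Carol]
        }
    }

In C the same exercise would force the student to think about arrays, pointers, string comparison and the sorting algorithm itself – exactly the kind of material that Dewar argues is being skipped.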

Far too often university computer science programs teach no language but Java.  Computer science students need many things including deeper system knowledge and a more widespread knowledge of different languages.  Computer science programs need to stop focusing on single, limited skill sets and start teaching the field of CS, and students need to stop accepting the easy way out and demand that their schools live up to the needs of the workplace!

While, by and large, I agree with Dewar whole-heartedly, he does have one comment that I find very disturbing – although very unlikely to be wrong.  He mentions, in more than one place, that Java is inappropriate as a “first language” as if computer science students at NYU and other universities are learning their first programming languages in college! This is an incredibly scary thought.

I guarantee that international students looking at careers in software engineering or computer science wouldn't think of entering university without a substantial background in programming.  I can't imagine a school like NYU ever considering such a case.  If we are allowing the entrance bar to be set so low, can we even meaningfully consider what we teach when apparently it matters very little?  Would we accept college students who didn't do algebra in high school?  Didn't speak English?  Knew no history?  Failed physics?  How then could we possibly consider allowing non-programmers into what should be one of the most difficult collegiate programs available, and how can we expect good, proficient programming students to learn something of value when forced to learn alongside complete beginners?

Dewar’s argument for the necessity of a higher standard of collegiate computer science education is that by dumbing down the curriculum and handing out meaningless degrees to anyone willing to pay for them (hasn’t this been my argument against the university system all along?) we are fooling outselves into believing that we are training tomorrow’s workforce when, instead, we are simply accelerating the rate of globalization as developing countries see a massive opportunity to invest in core disciplines and outpace the United States at breakneck pace.  Software development is a field with very little localization barrier inherent to the work and is a prime candidate for offshoring due to the nature of the work and the advanced communications commonly associated with its practitioners and the higher level of skills generally present in its management.  But by created a gap in the American education system we are making a situation occur that simply begs to be globalized as our own country is mostly unable to produce qualified candidates.

Missing from many discussions about the computer science curriculum is any consideration of the broader range of IT degree programs, such as Information Technology and Computer Information Systems.  Computer Science is a very specific and very intense field of study – or so it is intended.  Only a very small percentage of Information Technology professionals should be considering a degree program in CS.  This is not the program for administrators, managers, network engineers, analysts, database administrators, web designers, etc.  Even a large number of programmers should be seriously considering other educational avenues rather than computer science.

There is a fundamental difference in the type of programming that a comp sci graduate is trained to perform compared to a CIS graduate, for example.  CIS programs, even those targeting programming, are not designed around "systems programming" but are generally focused on more business-oriented systems, often including web platforms, client-side applications, etc.  CS is designed to turn out algorithm specialists, operating system programmers, database programmers – the kind of professionals that companies like Microsoft, Oracle and Google need in droves but not the type that the 300 seat firm around the corner needs or has any idea what to do with.  Those firms need CIS grads with a grasp of business essentials, platform knowledge and the ability to make good user interfaces rapidly.  These are very different job descriptions and the best people from either discipline may be pretty useless at the other.

All of this points to the obvious issue that companies need to start thinking about what it means to hire college graduates.  If all but a few collegiate programs are allowing CS programs to be nothing more than a few advanced high school classes in Java, why are we even looking at college degrees in the hiring process?  Hiring practices need to be addressed so that we stop blindly treating university degrees as having some intrinsic value.  We are in an era where the universities are wearing the emperor's new clothes.  Everyone knows that the degrees are valueless but no one is willing to say it.  The system depends on it.  Too many people have invested too much time and money to admit now that nothing is being taught, and that students leaving many university programs are nothing more than four or five years behind their high school friends who went straight to work and developed a lifelong ability to learn and advance rather than drinking beer while standing on their heads and spending their parents' or borrowed money.

Computer science departments need to start by developing a culture of self respect.  Teaching Java is not bad, but a CS grad should have, perhaps, one class in Java and/or C#, not a curriculum based around them.  Knowledge of leading industry languages like Java is important so that students have some feel for real world development, but a CS degree is not preparing most students for work in Java-based application development; it is preparing them for systems programming, which is almost exclusively done in C, C++ or Objective-C.

AppleTV Architecture  https://sheepguardingllama.com/2008/07/appletv-architecture/  Fri, 11 Jul 2008 20:09:46 +0000

If you spend any time reading Apple’s literature you will discover that they have an intended architecture for their AppleTV devices.  I was surprised to learn that Apple’s idealized concept for their media device was so completely different from how I had envisioned its use.

Apple sees the AppleTV as a centralized media consumption device.  Obviously the AppleTV is targeted at tech-savvy home users and, from what I have seen of Apple's official advertising, they expect a multi-computer home to have iTunes running on several computers (in bedrooms, the home office, etc.) and a single AppleTV unit placed in the home's "media center" location attached to a large screen display and surround sound audio system.  Under this design the AppleTV is the media consumer interface to all of the home's computing resources.

I am sure that for many potential AppleTV customers, especially those already very much entrenched in the ubiquitous use of Apple's iTunes, this model may make sense.  A family of four could have a Mac or a PC in the parents' bedroom and in each of the children's bedrooms, each running iTunes and containing that user's personal audio and video files.  Then a single AppleTV device placed in the living room or den and hooked to a big screen LCD high-definition television and a surround sound audio system could be used for serious viewing or family time, and the individual computers could be used for personal viewing or listening.

This model makes a lot of sense, especially in a home where all users have computers available to them and each person is likely to want to maintain their own repository of media.  In many cases, however, I believe that this may not be the optimum approach.  This "centralized AppleTV – decentralized media" approach leaves much to be desired, by my reasoning, for the average media consuming family.

My proposed architecture is based on the theory of “decentralized AppleTVs – centralized media.”  I feel that more often it will be a better use of resources to have many AppleTVs located throughout the home wherever media consumption is desired.  For example, having an AppleTV in each bedroom and in the living room and/or den.  Then, to support the AppleTV units, one single Mac or PC computer running iTunes would be used as a centralized “media server” so that all files are managed from a single location.  This gives each AppleTV throughout the home access to the entire family media archives very simply.

Of course you can use Mac desktops running Front Row to replace specific instances of the AppleTV.  This can allow for mixing and matching additional functionality as needed without disrupting the base home media architecture.  This system allows every room to enjoy movies and music through a dedicated "entertainment" machine while the desktop computers, if they exist, can be used solely for computing and will not have to share resources – most notably screen real estate – with video content.

Storage of media under Apple's proposed architecture requires each computer user to choose, store and protect their own media.  This means that each computer must be treated as a valuable resource and requires dramatically more long term media management.  It also means that there is a likelihood of media duplication throughout the house.  If every family member wants to be able to watch Disney's The Little Mermaid when they are going to bed at night then each computer has to have its own copy of the movie.  It only takes a handful of movies before this causes significant storage bloat.

Under my proposed architecture you can simply use the "media server's" internal disk for media storage, or if you grow beyond that point you can install a larger drive or just attach external hard drives.  If you have serious storage needs then you can back the iTunes application with an external storage system such as a NAS device.  Consumer grade NAS devices start under $1,000 and it is not financially unreasonable to move to custom server-based storage solutions which can easily hit 14TB today and will scale far beyond this in the near future.  (For reference, a typical new desktop machine today holds around 0.16TB, with the largest single drives being just 1TB – so 14TB is a significant amount of storage.)

Possibly the biggest advantage of having centralized media storage is that backups are very, very simple.  There is no bloat as there are not multiple copies of the same files floating around in different locations, and backups are only necessary from the media store (either the local drive, the external drives or the NAS device.)

In a previous article I discussed using the AppleTV as a means of controlling content being made available to children.  Apple’s architecture does not really take this advantage of their own system into account, but under my architecture children can safely have an AppleTV installed into their bedrooms with them having unlimited access to it without any worries that they will be able to access unintended content using it.

Choosing a Linux Distro in the Enterprise  https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/  Thu, 10 Jul 2008 17:27:22 +0000

Linux is popular in big business today. No longer, and not for a long time now, has Linux been the purview of the geek community but it is a solid, core piece of today’s mainstream IT infrastructure. That being said, Linux is still plagued by confusion over its plethora of distributions. This being the case I have decided to weigh in with some guidance for businesses looking to use Linux in their organizations.

For those unfamiliar with the landscape, Linux is a family of operating systems that are generally considered to fall under the Unix umbrella, although Linux is legally not Unix, just highly Unix-like. Individual Linux packages are referred to as distributions or distros, for short. Unlike Windows or Mac OS X, which come from a single vendor, Linux is available from many commercial vendors as well as from non-profit groups and individual distribution makers. Instead of there being just one Linux there are actually hundreds or thousands of different distributions. Each one is different in some way. This creates choice but also confusion. To make matters even worse, some major vendors such as Red Hat and Novell release more than one Linux distribution targeted at different markets, and will often package features separately within a single distribution. This myriad of choices, before you have even acquired your first installation disc, does not help make Linux uptake in companies go any faster.

In reality the choices for business use are few and obvious with a little bit of research. To make things easier for you, I will just tell you what you need to know. Problem solved. Now if only managing your Linux environment could be so easy!

Before we get started I want to stress that this article is about using Linux for enterprise infrastructure – that is, as a server operating system in a business. I am not looking into desktop Linux or high performance computational clusters and grid or specialty applications or home use. This article is about standard, traditional server applications that require stability, up time, reliability, accessibility, manageability, etc. If you are looking for my guide to the “ultimate Linux desktop environment”, this isn’t it. Desktops, even in the enterprise, do not necessarily have the same criteria as servers. They might, but not necessarily so.

When choosing a distribution for servers we must first consider the target purpose of the distro. Only a handful of Linux distros are built with the primary purpose of being used as a server. If your distro maintainer does not have the same principles in mind that you do it is probably best to avoid that distro for this particular purpose. Server distributions target longer time between releases, security over features, stability over features, rapid patching, support, documentation, etc.

In addition to targeting the distribution in harmony with our own goals we also need to work with a company that is reliable, has the resources necessary to support the product and has a track record with a successful product. Choosing a distribution is a vendor selection process. There are three key enterprise players in the Linux space: Red Hat, Novell and Canonical.

For many Red Hat is synonymous with Linux, having been one of the earliest American Linux distributions and having been a driving force behind the enterprise adoption of Linux globally. Red Hat makes “Red Hat Enterprise Linux”, known widely as RHEL, as well as Fedora Linux. Red Hat is the biggest Linux vendor and important in any Linux vendor discussion.

Novell is the second big Linux vendor having purchased German Linux vendor SUSE some years ago. Novell makes two products as well, Suse Linux Enterprise and OpenSUSE.

The third big Linux enterprise vendor is Canonical well known for the Ubuntu family of Linux distributions. While the Ubuntu distro family includes many members we are only interested in discussing Canonical’s own Ubuntu LTS distribution. LTS stands for “Long Term Support” and is effectively Canonical’s server offering. Their approach to versioning and packaging is quite different from Red Hat and Novell and can be rather confusing.

Before we become overwhelmed with choices (we have presented five so far) we have one here that we can further eliminate. Red Hat's Fedora is not an "enterprise targeted" distribution. It is a "testing" and "community" platform designed primarily as a desktop and research vehicle and not as a stable server operating system. To be sure, it is extremely valuable, a great contribution to the Linux community and has its place, but as a server operating system it does not shine. Nevertheless, without Fedora as a proving ground for new technologies it is unlikely that Red Hat Enterprise Linux would be as robust and capable as it is.

We can also effectively eliminate OpenSUSE.  OpenSUSE is the unsupported, community driven sibling to Novell's SUSE Linux Enterprise.  However, unlike Fedora, which is a product independent from RHEL, OpenSUSE shares its code base with SUSE Linux Enterprise but comes without Novell's support.  This is a great advantage to the SUSE product line as there is a very large base of home and hobby users in addition to the enterprise users, all using the exact same code and finding bugs for each other.  Going forward we will only consider SUSE Linux Enterprise, as support is a key factor in the enterprise.  But OpenSUSE, for shops not needing commercial support from the vendor, is a great option as the product is the same, stable release as the supported version.

So we are left with three serious competitors for your enterprise Linux platform: Red Hat Linux, Novell Suse Linux and Ubuntu LTS. All three of these competitors are solid, reliable offerings for the enterprise. Red Hat and Novell obviously have the advantage of having been in the server operating system market for a long time and have experience on their side. But Canonical has really made a lot of headway in the last few years and is definitely worth considering.

Red Hat Linux and Suse Linux Enterprise have a few key advantages over Ubuntu. The first is that they both share the standard RPM package management system. Because RPM is the standard in the enterprise it is well tested and understood and most Linux administrators are well versed in its functionality. Ubuntu uses the Debian based package format which is far less common and finding administrators with existing knowledge of it is far less likely – although this is changing rapidly as Ubuntu has become the leading home desktop Linux distribution recently.

In general, Red Hat Linux and Suse Linux Enterprise have more in common with each other making them able to share resources more easily and giving administrators a broader platform to focus skills upon. This is a significant advantage when it comes time to staff up and support your infrastructure.

Ubuntu suffers from having a direct tie to a "non-enterprise" operating system that is particularly popular with the desktop "tweaking" crowd.  Unlike Red Hat and Suse, Ubuntu is coming at the enterprise from the home market and brings a stigma with it.  Administrators trained on RHEL, for example, tend to be taught enterprise-type tasks performed in a businesslike manner.  Administrators with Ubuntu experience tend to be home users who have been running Linux for their own desktop and entertainment tasks.  This makes the interview and hiring process that much more difficult.  This is in no way a slight against the Ubuntu LTS product, which is an amazing, enterprise-ready operating system that should seriously be considered, but shops need to be aware that the vast majority of Ubuntu users are not enterprise system administrators and their experience may be mostly from a non-critical, desktop-focused role.  It is rare to find anyone running RHEL or Suse Linux in this manner.

In my own experience, having software popular with home users in the enterprise also brings in factors of misguided user expectations.  Users expect the enterprise installations to include any package that the users can install at home and that update cycles be similar.  This can cause additional headaches although the Windows world has been dealing with these issues since the beginning.

At this point you have probably noticed that choosing either Suse or Ubuntu leaves you with the option of both free and fully supported versions, direct from the vendors.  This is a major feature of these distributions because it provides great cost savings and greater flexibility.  For example, development machines can be run on OpenSUSE and production machines on Suse Enterprise, lowering the overall cost if full support isn't necessary for development environments.  You can run labs on free versions for learning and testing, or pay for support only on critical infrastructure pieces.  Or, if you are really looking to save money or feel that your internal support is good enough, running completely on the free, unsupported versions is a viable option because you are still using the stable, enterprise-class code base.

Red Hat, as a vendor, does not supply a freely available edition of Red Hat Enterprise Linux.  Instead, they make their code repositories available to the public and expect interested parties to build their own version of RHEL using these repositories.  If you are interested in a freely available version of RHEL, look no further than CentOS.

CentOS, or the Community ENTerprise Operating System, is a code-identical rebuild of RHEL.  It is identical in every way except for branding.  CentOS is completely free – but unsupported.  CentOS is used in organizations of all sizes exactly as a free copy of RHEL would be expected to be used, and many businesses choose to run CentOS exclusively.  As RHEL is the most popular Linux distribution in large businesses and the commercially supported version is rather expensive, CentOS also provides a very important resource to the community by allowing new administrators to experience RHEL at home without the expense of unneeded support.

Choosing between the Red Hat, Suse and Ubuntu families is much more difficult than whittling the list down to these three.  In many cases choosing between these three will be based upon cost, application demands, existing administration experience and features.  It is not uncommon for larger businesses to use two or possibly all three of these distributions as features are needed, but most commonly a single distribution is chosen for ease of management.  All three distributions are solid and capable.

Another potentially deciding factor is whether your enterprise is considering using Linux on the desktop.  While RHEL can be used as a desktop operating system it is generally considered to be substantially weaker than Suse and Ubuntu when it comes to desktop environments.  Because of this, Fedora is generally seen as Red Hat's desktop option, but Fedora is not commercially supported by Red Hat nor does it share a code base with RHEL, causing support to be somewhat less than unified even though the two are very similar.

For mixed server and desktop environments, Suse and Ubuntu have a very strong lead.  Both of these distributions focus a great many resources onto their desktop systems and they keep these components very much up to date and pay great attention to the user experience.  For a small company that can manage to use only one single distribution on every machine that they own this can be a major advantage.  Homogeneous environments can be extremely cost effective as a much narrower skill set is needed to manage and support them.

In conclusion: Red Hat Enterprise Linux, Novell Suse Linux Enterprise and Ubuntu LTS, in both their supported versions and their free counterparts (CentOS in the case of RHEL, OpenSUSE in the case of Novell, and Ubuntu LTS itself, which is the same product with or without a support contract), all represent great opportunities for the data center.  Do not be lulled into using non-enterprise Linux distributions because they are cool, flashy or popular.  Linux lends itself to being in the news often and to generating excess hype.  None of these things are good indicators of data center stability.  The data center is a serious business component and should not be treated lightly.  Linux is a great choice for the corporate IT department but you will be very unhappy if you pick your backbone server architecture based on its popularity as a gaming platform rather than on its uptime and management cost.

Why AppleTV is Great for Kids  https://sheepguardingllama.com/2008/03/why-appletv-is-great-for-kids/  Wed, 26 Mar 2008 15:21:21 +0000

Television and computers have long been challenges for children. We want to give them television and Internet access in their bedrooms but from a very early age this is, obviously, problematic. Having spent some time thinking about the mode in which the AppleTV operates, I believe that it may be a really great solution to this continuing conundrum.

AppleTV

AppleTV is a versatile device that works in several different modes. It has direct Internet access through YouTube. It can play media files that are loaded onto it. And it can play media files provided to it through an iTunes "server" application running on a host computer. It is these latter modes that are of the most interest to parents looking for a "controlled" solution for their children.

The first thing to mention is that the AppleTV has very good parental controls built in. With these controls parents can apply a range of locks, including removing all access to YouTube and direct Internet content, removing the ability to access the iTunes Store to obtain new, external material, and controlling the ratings of movies and television shows that will be allowed even when access to them is permitted. So right away there are a range of options that make the AppleTV safe and simple for parents to provide.

The true versatility of the AppleTV for youngsters comes from its “one level separation” from being directly connected to the Internet. Because there is a complete separation between the AppleTV and content on the Internet it is far easier and more secure for parental supervision to be enforced.

The AppleTV gets its content from an "iTunes server" – that is, a computer on your home network that is actively running iTunes and is paired with the AppleTV. Because iTunes is used to feed media to the AppleTV there is a level of direct control that does not readily exist in other systems. Here iTunes can be set to subscribe only to trusted channels or not to have any subscriptions at all. iTunes can be set to allow nothing but audio and video files loaded onto it by the parents. This is an extremely simple and effective means of content control, far beyond what is possible with a DVD player, since any DVD can be put into the player but the AppleTV can allow only content that is preapproved.

Content in iTunes can be purchased through the iTunes Store, purchased elsewhere online, or generated locally either as home movies or by using tools like Handbrake to convert purchased legacy media into AppleTV-ready h.264 files. AppleTV's native video format, h.264, makes for extremely small video files at very good quality – perfect for storing large collections of children's shows.

If access to your entire iTunes media collection is still too broad (perhaps you have some PG movies in there and want to limit accessibility to just a select few films or television shows), you can choose to lock iTunes so that only content that you explicitly load onto the AppleTV through the iTunes sync mechanism will be available. This makes it simple to load a large amount of media and then to limit it on a very granular level for very exacting control.

No matter which method or group of methods you choose to limit content access, the AppleTV is truly an answered prayer for parents looking to provide content access in a safe and simple manner for their children. The ease with which it can be used and the level of security that it offers are really remarkable. And because the device requires no physical contact to operate, it can be installed safely out of reach of young children, who can operate all of its functionality using nothing more than its small plastic remote. This will relieve much of the concern over putting an expensive electronic device into a young child's room or den where accidents will often happen.

Unlike services which are purely Internet streaming in nature, the AppleTV's local caching makes the device work even with an unstable Internet connection or in situations where there is no connection at all. This type of media device will operate surprisingly like a DVD jukebox when pre-cached with content. Children could have as much as 160GB of media sitting ready to go at any time, for themselves or for watching with their friends, without needing intervention from you.

The AppleTV really represents an opportunity to feel confident about having control of children’s content availability in an age of much uncertain access.

Utilities Are Localized Monopolies  https://sheepguardingllama.com/2008/03/utilities-are-localized-monopolies/  Tue, 25 Mar 2008 18:23:21 +0000

As a technology worker I suppose that I am exposed to the issues of utilities and localized monopolies much more often than the average person is. I am always surprised when I come across someone who is not aware that their utilities and infrastructure services are, by their very definition, monopolies within their local area. Utilities of this nature include services such as roads, water, sewer, electric, gas, broadcast television, radio, traditional telephone and cable services. Each of these services, by its nature, can only be provided once to each normal residential address. There are physical limitations making it impossible or impractical to provision competing services, and in each case doing so would cause major disruptions, increase costs, etc.

Roads are possibly the easiest to visualize since we see them every day. For most people there is only one road that passes near enough to their property, assuming that they own or rent property, to allow direct access. Even if another road exists nearby it is often not accessible without crossing other people’s property lines to reach it. For the average person having a “backup” access road to their home is simply not possible.

More importantly than the theoretical ability to access a second road (since we could mandate that all houses be built with a road on either side – at massive additional cost financially and environmentally) is the improbability that we could manage a system where one company would own and manage one set of roads and another would own and manage the second set of roads so that every resident would have the choice of whose roads to drive on. At best the road directly at your driveway would be clear but as soon as you reached an intersection there would be a dispute as to whose responsibility the intersection was. Each family would need to choose which road system they were going to access and pay road maintenance fees for repairs, snow removal, insurance, etc. just for the one that they use. That company would then need to pay access fees to the alternate road company so that you would have the right to visit friends across town who opted to use the primary road carrier – the one that you didn’t choose. At some point you will need to switch onto their roads to get into your friend’s driveway. Remember that the choice is to which road system you can access. Just because the road is next to your house doesn’t mean that you are allowed onto it – it is simply the competitor’s product.

The same situation would be true of water. What if you wanted a company to compete with your town's water supply? Perhaps they would offer cleaner water at a premium price, or cheaper water that is only good enough for washing the car. Sounds like a great deal. But now a new set of water mains has to be dug under your entire city. That isn't going to make people happy. And a second water treatment facility will have to be built somewhere in town. And every yard, yes even yours, will have to be dug up to allow the new water hookup to be brought to your house. And if you think that the price of water will go down because of competition, keep in mind that all of this infrastructure costs money and now each water treatment facility only processes half as much water, meaning it takes more people and more equipment to process the same amount of water. Prices have to go up. Inconvenient and more expensive.

It is because of these factors that you have never heard of a village offering competitive road or water services – imagine the disaster with competing sewage systems! Villages, towns and cities almost ubiquitously oversee all key utilities of this nature because it is in everyone’s interest that everyone have clean, safe water, efficient sewers and safe roads. It keeps the population healthy and allows everyone to go to work. These utilities are so obvious and have been around for so long that every village knows exactly how to perform these services and how to do them very efficiently.

We begin to see problems arise when we start looking at core infrastructure services that have only existed for the last century or so. Principally this means electrical, gas, telephone and cable service. These services – because they require additional capital investment, connect to infrastructure outside of the village or town and require greater technical knowledge – have almost entirely been left to the purview of private industry, generally operating under strict regulation.

Electrical power supply is the oldest of the "new" infrastructure services and, as such, has the most potential to be taken over and managed by the municipality itself. It is not uncommon to find small towns and jurisdictions that have decided to take their power needs "in house" and run their own power plants and maintain their own infrastructure. In many cases this proves to be very beneficial to the local residents as overall costs are often lower and service is local and friendly instead of being handled by some far away corporation. It can also generate local jobs that are stable and reliable. Local power plants are generally not able to take advantage of hydroelectric or nuclear power, however, so they are not always the best option. But the potential is there and with new wind and solar technologies today there could be more potential for this in the future. We must be aware, though, that one of the cost saving measures in small town power management often comes from having no research and development whatsoever, which will produce short term gains at long term expense. Large electric companies spend a lot of money making sure that their power is safe, cheap and reliable for a long time to come.

As we move towards newer and more "technology" focused services we move farther and farther away from a general understanding among the overall populace and we also move farther away from municipalities feeling that they should bring these services "in house." This feeling, I believe, comes from three primary issues. The first is that telephone and cable are massively more complex than even electric generation, which causes municipalities to need more extensively trained, and therefore more highly paid, staff for a rather small-scale deployment. The second is that these services are newer and have a greater sense of being "optional" rather than "required" services like water, sewer or electric. The third is that these systems inherently must connect to the outside world or they have no meaning. Other key services can, under ideal circumstances, exist completely within the borders of the jurisdiction and operate quite satisfactorily.

Telephone and cable services fall prey to the same issues affecting our other infrastructure components. Even though it is feasible to bring two sets of telephone lines and two sets of cable lines through a town, doing so creates a conflict over right-of-way access (a complex issue), an unsightly mess in many areas and decreased revenue potential for all businesses involved – which is fine in urban areas but would result in a complete loss of service in rural areas.

The current telephony monopoly situation originated when AT&T was given an almost total monopoly but was required to provide the same service at approximately the same cost to its urban and rural customers. Urban customers in areas with high telephone termination density would pay slightly more than the service would be expected to cost while rural customers paid the same rate. AT&T took a loss on rural telephone terminations under this system, making up the cost in its guaranteed urban profit centers. If telephone providers were forced to compete in the urban areas they would be under no obligation to provide service to "profit loss" centers and would not choose to do so.

Some municipalities have decided to compete with the incumbent local carriers and have provided their own telephone and cable services. These services generally are technological dinosaurs, however, and roll out at very high cost with very few features. Few local regions have the capability to supply these services at a level competitive with large technology companies that service the major markets. This situation is likely to change over time as the technologies involved become increasingly commonplace and as convergence removes the need for as many overlapping services.

In today’s Internet dominated communications world we actually have arrived at a situation with far more choice than we have had for the past several generations. Because both the traditional telephone infrastructure as well as the cable television infrastructures and even to some degree the cellular phone infrastructure can carry Internet access to our homes we have, for the first time, have the ability to choose between competitors for a core infrastructure service. These competition is simply the result of redundant legacy technologies being replaced with a converged modern technology. If the Internet had come first there would never have been two separate telephone and cable television systems and all of those services would have been delivered over a single Internet access line and people today would be furious at the thought of stringing another entire set of cables up in the sky overhead. But those decisions were made long ago in a different era.

This competition of services has proved to be very good for us today and gives us the opportunity not only to choose and change Internet access suppliers but also to purchase duplicate services, providing ourselves with a degree of reliability that did not exist for either service individually. In some rare areas Internet access is even available, or has been proposed to be made available, through the electrical power distribution system, providing a third vector for access to our homes. Multi-service Internet access is now commonplace enough that major vendors such as Netgear sell home router/firewall units designed to aggregate service across dual connections to provide better speed and reliability simply and automatically.

So the unfortunate situation that we find ourselves in is that there is no good answer for infrastructure services.  We must either submit to socialized control of these services by municipalities and regional authorities, which generally leaves us with lower prices at the cost of development, advancement and options, or allow private corporations to run these utilities, in which case we are "forced" to hand over monopolistic controls in the hope that regulations will keep prices and services in line.  The risk of either approach, of course, is that our access to critical services and, in some cases, to information and our view of the outside world is controlled by agencies and companies for whom there is no true competition.

As technology services become more commonplace I believe that we have a great opportunity for convergence and socialization again.  As some rare regions have done, telephone and cable infrastructure can be brought "in house" through heavy investment in fiber optic networking, allowing all services of this nature to be delivered with higher service levels, greater safety and lower long-term cost through a single, small cable.  Municipalities that choose to go this integrated services route will find that they can leverage scale for cost-effective Internet access through a few competitive long-haul carriers, allow residents to choose "telephone" services from Internet VoIP carriers that must compete on price and service, lower the power requirements (providing additional cost savings and safety) and greatly reduce the number of cables that must be strung through their regions.

For a relatively small investment a village, for example, could make Gigabit-speed fiber optic connections available to every single resident of the village for a fixed fee and allow competing "cable television" companies to house their distribution systems within the village's cabling hub, giving residents the right to choose which television provider to use, or to choose none at all.  Telephone service could be purchased from a large number of carriers, or residents could build their own telephony systems and bypass even those competitive carriers.  Only the core Internet access service – the base on which all else is derived – would be "owned" and managed by the community, providing a minimum amount of infrastructure for a maximum amount of services.

Are You Vista Capable?  https://sheepguardingllama.com/2008/03/are-you-vista-capable/  Thu, 20 Mar 2008 16:48:50 +0000

Following my last article on Microsoft’s Windows Vista operating system and its review from the New York Times I felt that I should provide my own insight into the state of Windows Vista. I have been using Windows Vista for almost a year now. I am an IT professional and an early adopter of most technologies so I start using new operating systems a bit before the general public should consider looking at them. My main operating system is Novell’s OpenSUSE Linux 10.3 which is, in fact, newer than Windows Vista and my secondary machine is Windows XP Pro SP2.

(Warning, what is about to follow is anecdotal evidence as to the state of Vista from my own, limited first hand observations. But it could be worse, it could be second hand and out of context.)

My first attempt to work with Vista was on a dual-core AMD Turion X2 laptop. My hope was that with Vista it would finally make sense to run the operating system in 64-bit mode as Windows XP Pro 64-bit was a bit lack-luster. In Windows XP driver support had been extremely poor and I was unable to get much of anything to work. So all of my Windows XP machines ended up staying as 32-bit while my Linux machines moved back and forth. On Linux almost everything worked great as 64-bit. Only rarely would I get a driver issue or compatibility problem.

For the first week or so Windows Vista was incredibly slow. I decided that trying the 32-bit version of Vista (both had shipped with the laptop, thankfully) might be a good idea. So I performed a clean re-installation of Vista and started again.

Under Vista32 I noticed a significant increase in the overall speed and stability. The whole system seemed to hum right along now without the apparent slowness that I had had in 64-bit mode. Vista32 seems to work exceedingly well and starts and stops more reliably than my Windows XP machines have done in the past. The reliability of the shutdown process has been a major concern of mine from past Windows editions.

Because of the types of applications that I generally use on Windows (e.g. not video games, not entertainment applications, mostly serious business and management applications, only current versions, etc.) there were no compatibility issues in moving to Vista. Not a single application has failed to run and, I am told, the only game that I would actually care about (Age of Empires 2, circa 1998) runs beautifully in Vista. I have a friend who has tested this on three separate Vista machines.

Few applications that are programmed "correctly" using Microsoft's published standards and industry best practices have any issues moving to the Vista platform, in my experience. All of the complaints that I have heard about applications not working involve either video games – which seldom follow platform guidelines – ancient legacy applications, or small independent vendor applications that always fail to work between platforms because there are no updates, standards aren't followed, etc. It happens. Every new operating system breaks a certain amount of old applications but in many cases, most cases, this is simply a separating of the wheat from the chaff. It is good to shake up the market and point out the weaklings in the herd and thin it out a bit for everyone's long term health. Think of it as software genetics in action.

For contextual reasons I should point out that I have been using client side "firewalls" – a term that I am loath to use but one that has become somewhat the norm – for a long time, first with Symantec and more recently with Microsoft's Live OneCare, and am quite familiar and comfortable with the concept of unblocking ports for every new application that is installed or any change that is made. I am also used to this through the use of AppArmor on SUSE Linux and SELinux on Red Hat Linux.

Already being used to this as a matter of course makes the transition to Vista's security system almost transparent. I have heard numerous complaints about the barrage of security notifications popping up and asking if "this software should be allowed to install" or if "such and such a port should be allowed to open", but if people had been diligent about using past operating systems this would be neither new nor a surprise. This type of checking is wonderful in the computer security nightmare world in which we live. Many people want this "feature" suppressed but these are often the same people asking for continuous help to fix their virus- and Trojan-riddled computers, caused not by malicious external attacks but by bad computer management habits and behaviours.

Even as a technology professional who is constantly installing and uninstalling applications, doing testing, making changes, fiddling with the network, etc., the number of these security alerts is not quite annoying enough to push me past the point of appreciating the protection that they provide. A normal user, who should not be installing new software or making network changes on a daily basis, should see these messages mostly during the initial setup of the workstation and then only rarely when new software or updates are applied. If this security feature is becoming annoying due to its regularity one must carefully ask oneself if there isn’t a behavioural issue that should be addressed. It is true that some users need to do “dangerous” things on a regular basis to use their computers the way that they need to use them. But these people are extremely rare and can almost always manage these issues on their own (turning off the feature, for example).

Some people have had issues with the speed of their Vista machines. All of the complaints that I have heard to date, however, come from people who have moved from Windows XP to Windows Vista on the same hardware. This is not a move that I would suggest. Yes, Vista is slower than XP and noticeably so, just as XP was somewhat slower than Windows 2000 (although not very dramatically, as 2000 was already so slow that XP may not actually even be slower than 2000!). Windows 2000, by contrast, was dramatically slower than Windows NT 4 and required many times more system resources. The jump from the NT4 to the NT5 family was, by far, the biggest loss of performance that I have witnessed on these platforms. The move to Vista is minor.

The fact is that moving to newer, more feature-rich operating systems almost necessitates that the new operating systems will be slower. Each new generation is larger than the generation before. Each new version is more graphics intensive (not true with Windows 2008 Core – yay!) and has power-hungry “eye candy” that demands faster processors, more memory and now graphics offload engines. Users clamour for features and then complain when those features cause their operating systems to be larger and more bloated. You can’t have both. If you want a car with one hundred cubic feet of hauling capacity, the car absolutely must have more than one hundred and four square feet of surface area – that is the surface area of a sphere holding that volume, and no car is a sphere. Period. It’s math. End of discussion. This isn’t Doctor Who – the inside can’t be larger than the outside. And your operating system can’t have less code than the sum of its components.

If I have one major complaint about Windows Vista it is the extreme difficulty of finding standard management tools within the operating system. Under previous editions of Windows one could go to the Control Panel and find commonly used management tools in one convenient place. Now even the place to modify a simple network setting – a fairly common task, and one that is impossible to research online at the moment one needs it most – is nearly impossible to find, even for full-time Windows desktop support professionals. The interface for this portion of the system is cryptic at best and nothing is named in such a manner as to denote what task could possibly be performed with it.

Altogether I am very pleased with Vista and the progress that has been made with it and I am looking forward to seeing the improvements that are expected to come with the first Service Pack that should be released very soon. Vista is a solid product and Microsoft should be proud of the work that they have done. The security has been much improved and I hope that Vista proliferates in the wild rapidly as this is likely to have a positive effect on the virus levels that we are currently seeing.

Caveat: More so than previous versions of Microsoft Windows, Vista is designed to be managed by a support professional and used by a “user”. Vista is somewhat less friendly, out of necessity, and the average user would be better served by simply allowing a knowledgeable professional to handle settings and changes. Vista pushes people towards a “managed home” environment more akin to a business environment.

This change, however, is not necessarily bad. As we have been seeing for many years, the security threats that come with regular access to the Internet are simply far too complex for the average computer user to understand, and with the number of computers in the hands of increasingly less sophisticated computer users the ability of viruses and other forms of malware to propagate has increased many times over. Computer users who do not properly protect themselves from threats are a threat not only to themselves but to the entire Internet community.

In a business we do not expect non-technology professionals to regularly manage their own desktops, and perhaps we should not expect this of home users either. Computers are far more complex than a car, for example, and only advanced hobbyists or amateur mechanics would venture to do much more than change their own oil. Why then, when a computer can be managed and maintained completely remotely, would we not use the same model for our most complex of needs?

With some basic remote support to handle the occasional software install or configuration change, automated system updates, a pre-installed client side “firewall” and a good anti-virus package, a normal home user could run their Windows Vista machine in a non-administrative mode for a long time with little outside help while enjoying an extreme level of protection. The loss of some flexibility would be minor compared to the great degree of safety and reliability that would be possible.

NY Times vs. MS Windows Vista https://sheepguardingllama.com/2008/03/ny-times-vs-ms-windows-vista/ Mon, 10 Mar 2008 20:01:24 +0000

People often wonder why I am so adamantly opposed to the established journalistic media outlets. Often people will claim that some papers, such as the “illustrious” New York Times or the Wall Street Journal, are exceptions to the continuing trend of soft journalism. But I contend that these two papers may actually be some of the worst offenders – perhaps even using their long-standing positions of being “above reproach” to excuse an even greater lack of professionalism and to allow bias in their reporting.

In a recent New York Times article, “They Criticized Vista. And They Should Know,” author Randall Stross, a professor at San Jose State University, uses skewed anecdotal evidence and out-of-context examples in a blatant attempt to bias the reader against Microsoft’s latest operating system, Windows Vista. Whether this has occurred simply because the author does not understand the material, because the New York Times has its own political agenda or because they have been paid to reverse-advertise by the competition, I cannot say. But for some reason the “illustrious” New York Times is using its position as a media outlet to the detriment of honesty and to mislead readers who have been misled into paying for what proves to be little more than a tabloid.

Mr. Stross begins his article by presenting the issue of Vista’s slow adoption rate. He acts as though its adoption rate is unexpected or somehow inappropriate for a new operating system. However, given Windows XP’s presence in business, its longevity, stability and feature set, it is not surprising or unexpected in the least that Vista – not having yet reached Service Pack 1 – would have a very slow adoption rate. Each new operating system generation has to contend with a smaller and smaller value proposition for people updating, and it has been seven years since the last major round of Microsoft operating systems – almost an eternity in the IT industry.

Vista also has a new kernel architecture (the first of the Windows NT 6 family as opposed to the NT 5 family that we are used to with Windows 2000 – NT5, Windows XP – NT5.1 and Windows 2003 – NT5.2) and therefore has many hurdles to cross that have not been seen since the migration from Windows NT 4 to Windows 2000. Additionally, this is the first major NT to NT family kernel update to hit the consumer market. Earlier NT family updates happened almost entirely within businesses where these processes are better understood and preparations happen much, much earlier. This is the first major consumer level change since users were slowly migrated from the Windows 9x family (95, 98 and ME) to the NT family (2000, XP) which happened over a very long time period. Users should recall the large number of headaches that occurred during that transition as few applications were compatible across the chasm created by the new security paradigm.

Mr. Stross takes the approach that Microsoft needs to answer for the naturally slow adoption of a new, somewhat disruptive technology, but this is ridiculous. Vista market penetration is expected to be slow within the industry and no one is wondering why it hasn’t appeared on everyone’s desktops or laptops yet. Vista technicians are still being trained, bugs are still being found, issues are still being fixed, applications are still being tested and Service Pack 1 is still being readied. I have Vista at home but I am an early adopter. I don’t expect “normal” users (read: non-IT professionals) to be seriously considering Vista updates themselves until later this year.

Our author then asks the question “Can someone tell me again, why is switching XP for Vista an ‘upgrade’?” Actually, Mr. Stross, in the IT world this is what is known as an “update”, not an “upgrade”. An update occurs when you move from an older version to a newer version of the same product. An upgrade occurs when you move to a higher level product.

Windows XP Home to Windows Vista Home Basic is an update. Windows Vista Home Basic to Windows Vista Home Premium is an upgrade. Windows XP Home to Windows Vista Home Premium is, in theory, both. Please do not mislead consumers by claiming that Windows Vista is an upgrade. It is not. Windows Vista is simply the latest Windows family product for consumer use.

If you have Windows XP and it is meeting your current needs why would you go the route of updating? I have no idea. I think that people need to answer that question before having unreasonable expectations of any new software product. Windows XP is still supported by Microsoft and will be for a very long time.

If I may make a quick comparison, moving from Windows XP Home to Windows Vista Home Basic is like moving from a 2002 BMW 325i to a 2007 BMW 325i. This is not an upgrade. It is simply an update. Just a newer version of the same thing. Sure, some things change between the versions but no one would consider this to be a higher class of car. If you want a higher class get yourself a 760i.

Mr. Stross goes on to regale us with horror stories of Vista updates gone wrong. In each of the cases what we see is a confused consumer who felt that, contrary to Microsoft’s recommendations and contrary to any industry practice, they could simply purchase any edition of Vista and expect any and every piece of software that they owned to work. This is not how Windows, or any other operating system, functions.

In the first example, Jon A. Shirley – former Chief Operating Officer and President and a current board member at Microsoft – updates two home computers and then discovers that the peripherals that he already owned did not yet have Vista drivers. Our author does not mention whether or not Mr. Shirley checked on the status of these drivers before purchasing Windows Vista, nor does he complain about the third party vendors who were not providing Vista drivers. It is implied in the article that it is Microsoft’s responsibility to provide third party drivers. It is not. Drivers are the responsibility of the hardware manufacturers. Hardware compatibility is the responsibility of the consumer. In neither case is Microsoft responsible for third party drivers. It may be in Microsoft’s best interest to encourage their development but they are not Microsoft’s responsibility.

In the next example we see Mike Nash – Vice President of Windows Product Management – who buys a Vista-capable laptop. This laptop would have been loaded with Windows XP but capable, as stated, of running at least Windows Vista Home Basic when it would become available. It is absolutely critical to keep in mind that Windows XP Home’s direct update (not upgrade) path is to Windows Vista Home Basic.

When Mr. Nash attempted to update his laptop to Vista we are told that he was only able to run a “hobbled” version. What does “hobbled” imply? We can only assume that it means that he can run Windows Vista Home Basic, as we would expect. What has handily been done here is that one version of Windows Vista has been declared “hobbled” and another “not hobbled” even though consumers must pay for the features between the versions – an upgrade. Is a BMW 325i hobbled because the BMW 335i has a bigger engine but requires more fuel?

It is also mentioned that Mr. Nash is unable to run his favourite video editing software – Movie Maker. It is true that the edition of Movie Maker that comes with Windows Vista has some high requirements that may have kept Mr. Nash from running it. But Microsoft makes a freely downloadable version of Movie Maker for Windows Vista specifically for customers who have run into this limitation. So this is not even a valid argument.

It is implied that Microsoft misled consumers by stating that the laptop was Vista-capable, but we are never told that Windows Vista failed to install successfully or to work properly. What is being done here is the application of unreasonable expectations on Microsoft. Microsoft had stated extremely clearly, since long before Windows Vista was released to the public, that there would be different versions and that many of the features had specific hardware requirements beyond the base requirements. The features in these higher-end editions were upgrade features not included in the basic Windows Vista distribution.

This begs the question “Could Microsoft have done more to inform their customers of the Windows Vista requirements?” Perhaps. But the answer is not as easy as it seems. As it was, these requirements were incredibly well known and publicized. The issue that we are dealing with is consumers, including some inside of Microsoft, who did not check the well publicized details and had unreasonable expectations. It is much like the often heard story of the purchase of a video game that requires an expensive high end graphics card that the purchaser does not possess. That application has higher requirements than Windows Vista Home Basic, so why shouldn’t Windows Vista Ultimate Edition have higher requirements too?

It is unfortunate that so many consumers have difficulty understanding computers well enough to be able to purchase them effectively. It is also unfortunate that many choose to ignore requirements that are clearly stated because it is too much effort. But in neither case can Microsoft be held to a higher level of expectation than any other company in the same position. If a Linux based desktop operating system were being purchased the same problems would have applied: some features would require a more powerful machine and some are very complicated.

A key issue here is that because these two pieces of anecdotal evidence come from high-ranking Microsoft insiders we treat them as if they are more important than normal consumer issues. The fact is that these two Microsoft employees did not do the same level of consumer diligence that I would expect of anyone buying something so expensive and complex as a new computer. Computers are complex and desiring to “future proof” your purchase requires some careful forethought and planning.

We are also not seeing the whole picture. Perhaps Mr. Nash and Mr. Shirley intentionally purchased Vista without putting in any forethought precisely to see what problems the least diligent segment of customers were likely to run into, and were using that information to allow Microsoft to attempt to fix those problems even though it was not Microsoft’s responsibility to do so. In that case Microsoft should be praised for being willing to put so much effort into fixing things that are not its problem just because it makes for happier customers.

I am most unhappy with this article’s use of two pieces of out-of-context anecdotal evidence as the basis for the implication that Vista is not yet finished – calling it “supposedly finished” without any justification whatsoever. This is called “leading”. Clearly Windows Vista was finished, shipped and is used by many people. But now the reader is led to believe that it is not finished even though that is never actually stated by the author. This is not the job of journalism – to decide on a verdict and indicate to the reader the way in which they should think. While not strictly lying, the intent is to mislead, which makes the intent, in effect, to lie.

Even worse is the blatant falsification that PCs were “mislabeled as being ready for Vista when they really were not,” which is completely and utterly untrue and clearly intentional defamation and libel. It is never shown that Windows Vista failed to run on any machine labeled as being capable of running Windows Vista. It is simply implied that some upgrades to higher editions of Windows Vista were not possible.

The article wraps up with a look at the timeline of the decision process in the labeling of machines as being Vista-capable. We can see that internally Microsoft was torn as to which direction to go but chose, in the end, to label all machines capable of running Windows Vista as being Vista-capable.

I understand that there are many reasons why Microsoft might have wanted to mislead consumers (for the consumers’ own good) into buying overpowered new hardware, and to feed the coffers of its hardware partners, by labeling a machine Vista-capable only if it was able to run the high-end, expensive upgraded editions that would only be of interest to more affluent or intensive users.

Nevertheless, Microsoft resisted misleading consumers, labeled the computers honestly and accurately, and did not use the Vista release as an opportunity to push hardware prices higher. Labeling them in any other way would actually have been misleading and would have been of questionable intent.

At least poorer consumers were not told to buy expensive computers only to find out that a much less expensive model would have sufficed to run Windows Vista! Microsoft would most definitely have been accused of misleading customers in that case. Those customers for whom the price of the computer was most difficult to manage were the ones protected the most.

The article ends by asking “where does Microsoft go to buy back its lost credibility?” But the real question is this: after so blatantly attacking Microsoft without merit, where do the New York Times and San Jose State University professor Randall Stross go to buy back their credibility?

CPUs, Cores and Threads: How Many Processors Do I Have? https://sheepguardingllama.com/2008/03/cpus-cores-and-threads-how-many-processors-do-i-have/ Fri, 07 Mar 2008 17:25:07 +0000

In my job role I am very often called upon to determine how many “processors” a machine has or how many we will need for a specific task. Ten years ago this was a simple process but today even the very concept of the “processor” is fuzzy at best and only a very few people have a clear understanding of what it means. I spend much of my time explaining, as best as possible, the terms needed to even discuss processors today as everything processor related must be seen in context.

Before we begin let us look at the terms involved in discussing processors, starting from the bottom of the stack. On the bottom we have the chip carrier; this can be something as simple as a motherboard (a.k.a. mainboard, systemboard, MoBo), a processor daughtercard or a dedicated chip carrier. Any of these will qualify for our use. A chip carrier holds sockets. Chip carriers can have a single socket or many.

On a standard desktop or laptop computer we would expect to find that we have a single motherboard containing a single socket. On a mid-sized server such as the HP DL360 G5 we see a single motherboard with two sockets. On a larger server such as the Sun SunFire x4600 we see several daughtercards each with a single socket but with a total of eight sockets available in the overall system.

The Intel Pentium III Slot 1 chips are a perfect example of a dedicated chip carrier. In the case of the Slot 1 Pentium III processors the processor itself was mounted directly onto a small daughterboard that was dedicated to the purpose of carrying the Pentium III processor and its associated voltage management electronics. This small card was enclosed in plastic for protection and would be attached directly to the motherboard.

Above the layer of the chip carrier is the socket. A socket is a physical connector allowing a chip to be connected to a board. Occasionally a chip as important as the CPU will be connected to a board without a socket. This is more common when dealing with embedded systems and is exceedingly rare in general purpose computing. In such a case the connection itself can be considered analogous to the socket. The use of the socket in this explanation can be confusing because of its questionable interpretation, but it must be included because potential system capacity and classification are normally identified at this level.

It is in the counting of sockets per computer that we determine the maximum “way” of a server. For example, the DL360 mentioned above is classified as a two-way server and the x4600 is an eight-way server. This is the case when the server is at capacity. A particular server is classified by the number of sockets in use. For example, a DL385 with just one socket occupied could be considered a one-way server with extra potential capacity. By adding another chip to the second socket we are said to be upgrading from a one-way to a two-way. Many server vendors have started advertising the “way” of their servers based on non-socket factors but this practice is non-standard and highly misleading. Be sure to compare servers based on socket capacity and not on advertised “way”.

Each socket is capable of holding one physical processor. While sockets are purchased with the board to which they are attached, a chip can be purchased already in a socket or as a standalone product. Processors are often sold in stores in boxes just like any other product and are the most visible form of “processor” that consumers will face. This is the only “processor” that can be seen visibly, held in the hand, bought as an item in a store, etc. This is the physical manifestation of processing power. Just as socket count determines the maximum “way” of a computer, the processor count determines the current “way” of that computer. Most consumers or desktop administrators think of processors in terms of the physical chip. If the term processor is to have an official usage, this is the level at which it is most appropriate. Common examples of a processor include the AMD Opteron, Intel Core, Intel Pentium II, Sun UltraSparc IV or IBM Power6.

The most important industry recognition of this “level” being the “processor” is that Microsoft, Oracle and most major software vendors use this definition of processor to determine their per-processor licensing requirements. Because of this stance on the definition of processor, and its long history of use mostly in this context, we are likely to see the word processor remain linked to the physical entity.

Each processor chip can have one or more dies carried within it. A die is not visible as it is encased in the protective material of the processor. The die consists of the semi-conductive substrate and is a discrete electrical element within the processor. A die is the most difficult portion of a processor to define, in my opinion, as it is completely invisible unless you break apart a processor, and even then dies are extremely difficult to see because of their size and density.

A CPU, or Central Processing Unit, is, and has been, generally tied to a die. One die contains one single CPU. A die and a traditional CPU are, roughly, synonymous. Technically an important difference remains because a die can contain components in addition to the CPU, such as support processing. In a more general sense, a die can contain types of integrated circuits other than a CPU, so the two words are not the same thing even though they effectively are when we are only discussing general use processors – CPUs. Strangely, it is at this level that we find the term CPU, used so commonly yet so extremely misunderstood.

Within a single CPU there can be one or more processing cores. A core is the real workhorse of the processor stack. It is within a core that the actual processing work is done. It is most common, today, for a CPU to contain only one core. There is a common misconception that this is not the case due to marketing efforts to convince people otherwise. Internal processor architecture should not be used as a marketing tool as it is simply confusing and misleading. Only a holistic view of processor performance characteristics can provide adequate comparisons when deciding on a processing platform. No single architectural element will have an impact large enough to be usable as a determining factor in processor selection. But more importantly it is not feasible for anyone who is not a chip architect with a solid grounding in IC design concepts to even remotely grasp the intricacies involved in the design of a microprocessor.

In traditional processors, like the Pentium III, there is one core per CPU. This is very simple. In many modern processors such as the Intel Core or the Intel Core 2 there is still only one core per CPU while there are multiple CPUs per processor (each CPU is on a discrete die within the processor.) So an Intel Core 2 Duo would be a single processor with two dies, each with one CPU, each with one core. This gives a total of two cores per processor. It is multi-core as well as being multi-CPU. Technically the term multi-core should not apply here as that is only useful in a different and important context. In the AMD Opteron processor we see a single processor with a single die and single CPU with two cores within that single CPU. In this case we have a multi-core single-CPU configuration. This is a true multi-core processor. Multi-core within a single die/CPU is an important distinction because it changes the ability of components to communicate amongst themselves. The most confusing thing here is that the Intel product is named “Core” while being based on multi-CPU technology. This has led to a proliferation in the misuse of the term core.

Cores are still an extremely important component to use in normal system discussions, however. Cores are discrete processing elements and therefore represent a very important view of our computers. By looking at cores we can see how many independent parallel actions can be taken by the processors at one time. This is very important for understanding the scaling and capacity abilities of our computers. A computer can only truly parallelize to the extent of its “core” capacity, as the sketch below illustrates.
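
To make that concrete, here is a minimal Python sketch (my own illustration, not something from the original discussion) showing the effect: a purely CPU-bound task distributed across a process pool speeds up as workers are added, but only until the worker count reaches the machine's real parallel capacity. Note that os.cpu_count() reports logical processors, which on a multithreaded processor may be higher than the true core count.

import os
import time
from multiprocessing import Pool

def burn(n):
    # Purely CPU-bound work: no I/O for extra workers to hide behind.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work = [2_000_000] * 16  # sixteen identical CPU-bound tasks
    logical = os.cpu_count() or 1
    for workers in (1, 2, logical, logical * 2):
        start = time.perf_counter()
        with Pool(processes=workers) as pool:
            pool.map(burn, work)
        print(f"{workers:>2} workers: {time.perf_counter() - start:.2f}s")

On a dual-core machine the two-worker run finishes in roughly half the time of the single-worker run, while doubling the workers again gains little or nothing – exactly the “core capacity” ceiling described above.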

The final layer of our stack that we need to examine is that of the multithread (a.k.a. Hyper-Thread, SuperThread, etc.) The most well known example of this is Intel’s implementation used in their Pentium 4 derived XEON processors. In current use the Sun UltraSparc T family of processors are the poster-children for multithreading. Multithreading does not truly add additional parallelism to the processing structure but it can be used, under certain loads, to make the processing pipeline more efficient and to push multiple threads of execution into the processor roughly simultaneously. Multithreading is complicated, but in the absolutely simplest terms (and possibly the most useful to the layman looking to grasp the correct use of this technology) it can be thought of as allowing the processor to manage thread execution and scheduling instead of leaving this solely to the operating system. In reality what is performed is vastly more complicated than this.

Multithreading is useless for single-threaded workloads and its mere presence will degrade performance. Multithreading is most useful for highly threaded workloads. It is currently seeing a lot of positive use in the areas of web servers and databases. To transfer decision making from the operating system to the multithreading portion of the processor, an MT processor presents each of its thread processors to the operating system as a separate “logical processor”. It is at this point that we finally see the concept of processor as viewed by the operating system. This “logical processor” is what we view in Microsoft’s PerfMon or TaskMgr or in top on Linux. Often this is what we think of as being the processor.
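
For the curious, here is a small sketch (again my own illustration, not from the original article) of how these logical processors can be counted from software. It assumes a Linux system where /proc/cpuinfo exposes the usual “processor”, “physical id” and “core id” fields; some virtual machines and older kernels omit the latter two.

import os

def count_linux_processors(path="/proc/cpuinfo"):
    # /proc/cpuinfo prints one stanza per logical processor. "physical id"
    # identifies the socket and "core id" the core within that socket, so
    # counting distinct values separates sockets, cores and logical processors.
    sockets, cores, logical = set(), set(), 0
    current_socket = None
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = [part.strip() for part in line.split(":", 1)]
            if key == "processor":
                logical += 1
            elif key == "physical id":
                current_socket = value
                sockets.add(value)
            elif key == "core id":
                cores.add((current_socket, value))
    return len(sockets), len(cores), logical

if __name__ == "__main__":
    print("logical processors seen by the OS:", os.cpu_count())
    if os.path.exists("/proc/cpuinfo"):
        s, c, l = count_linux_processors()
        print(f"sockets: {s}, cores: {c}, logical processors: {l}")

The first figure is what TaskMgr or top displays; the socket and core counts are what licensing and true parallel capacity are actually based on.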

Now that we have been bombarded with terms, layers and models we will look at a few examples to help determine how we should approach the classification of processors. We will look at the HP DL360 G2, the HP DL585 G2, the HP DL580 G4, the HP Compaq DeskPro d530 and the SunFire T2000.

In our first example we will look at the very traditional and standard Hewlett-Packard/Compaq Proliant DL360 G2. This server has a single motherboard containing two processor sockets. Each socket accepts one Intel Pentium III-S processor (up to 1.4GHz.) At this level we can identify this server as a true two-way server. Each Pentium III-S processor contains a single die / CPU. Each CPU has one core and each core is natively threaded with no multithreading capabilities. So, in total, this server is a two-way server with two processors, two CPUs, two cores and two logical processors to present to the operating system. Very simple, very straightforward. Just as we expect a computer to behave.

Our second example is the Hewlett-Packard Proliant DL585 G2. This server has four processor sockets on its motherboard making it a true four-way server. Each socket can hold an AMD Rev F Opteron Dual-Core processor. Each Opteron, in this scenario, has a single die with a single CPU. Each CPU has two cores and each core has only the native thread handler providing a total of one logical processor per core. So our total is four-way, four die / CPU, eight core and eight logical processors presented to the operating system.

Our third example is the Hewlett-Packard Proliant DL580 G4. The Proliant DL580 G4 has a four socket motherboard capable of holding four Dual-Core Intel XEON 7000 series processors. This, like the DL585 G2, is a true four-way server when fully populated. Each XEON 7000 processor contains dual dies / CPUs and each CPU contains one core for a total of two cores per processor. Each core has a single native thread handler. So our total is four-way, eight die / CPU, eight core and eight logical processors presented to the operating system.

My desktop example is the Hewlett-Packard Compaq DeskPro d530. This desktop unit has the option of using the Intel Pentium 4 HyperThread processor, which is what makes it interesting for our purposes. We will use this processor in our example. The DeskPro d530 has a motherboard that supports a single Pentium 4 (or Celeron 4) processor. Like most desktops this is a one-way machine. Each Pentium 4 processor has a single die / CPU with a single execution core. Each core on a traditional Pentium 4 (or Celeron 4) can execute just a single thread but, in our example, we will use the HyperThread version of the P4 which can handle two simultaneous threads, presenting two logical processors to the operating system. So we have a one-way desktop with a single processor with a single CPU containing a single core with two multithread handlers presenting two logical processors.

To make this analysis more complicated we must also be aware that because of single thread performance problems on the Pentium 4 HT platform it was very common for HyperThreading to be disabled on these processors through a BIOS setting. In those cases the threading model returns to native and only a single logical processor is presented to the operating system. This is the only example, of which I am aware, of a processor having a selectable number of presentable logical processors. The efficacy of using the HyperThread feature depended upon the operating system and workload characteristics. For example, Windows 98SE or ME running on the d530 could not even see the second logical processor because they only offer a uni-processor kernel, so HyperThreading is not even possible there. With Windows 2000 or XP both logical processors were visible and usable, but some workloads, such as most video games at the time, could not take advantage of it while many business workloads could. Each user would have to determine which mode made the most sense for them, adding to the complexity of the situation.

Our final example is the Sun SunFire T2000 server. The SunFire T2000 is a single socket motherboard designed to hold one UltraSparc T processor. This is a true one-way server. Each UltraSparc T processor has a single die / CPU. Each CPU contains either four, six or eight cores depending on the purchased configuration – we will use eight in our example. Each of these eight cores has four thread handlers. In this machine we therefore see a one-way server with a single processor with a single CPU containing eight cores and a total of thirty-two simultaneous multithreads being presented to the operating system as thirty-two logical processors.
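
Summing up the examples above, the logical processor count that the operating system ends up seeing is just the product of the layers: sockets in use, dies per processor, cores per die and threads per core. A tiny sketch, using the figures quoted in the examples above, makes the arithmetic explicit:

# (sockets, dies per processor, cores per die, threads per core)
examples = {
    "HP DL360 G2 (Pentium III-S)":             (2, 1, 1, 1),
    "HP DL585 G2 (dual-core Opteron)":         (4, 1, 2, 1),
    "HP DL580 G4 (dual-core XEON 7000)":       (4, 2, 1, 1),
    "HP DeskPro d530 (Pentium 4 HT)":          (1, 1, 1, 2),
    "Sun SunFire T2000 (8-core UltraSparc T)": (1, 1, 8, 4),
}

for name, (sockets, dies, cores, threads) in examples.items():
    total_cores = sockets * dies * cores
    logical = total_cores * threads
    print(f"{name}: {sockets}-way, {total_cores} cores, {logical} logical processors")

This reproduces the totals worked out above: two, eight, eight, two and thirty-two logical processors respectively.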

As computer systems continue to increase the number of logical processors presented to the operating system, efficient process and thread handling by the operating system kernel becomes more and more important.  Many traditional systems have not been able to handle multi-processor situations very efficiently, if at all, but today, with the number of available logical processors skyrocketing even in desktops, the need for good process and thread handling across a potentially large number of logical processors is extremely important.

As you can see the issue of determining the number of processors, cores, CPUs, etc. is extremely difficult.  It is clear why people have become confused and why marketing is playing such a significant role in determining the public’s perceptions of these architectural components.  The most important components to keep clear are the counts for way, processor, core and logical processor (virtual processor, processing thread, execution engine, etc.)  Underlying component issues, while important to be semantically correct and to understand the working of processors, are still underlying components and should not be thought of as being the defining characteristics of our computer systems today.

Project Management of the RMS Titanic and the Olympic Ships https://sheepguardingllama.com/2008/02/project-management-of-the-rms-titanic-and-the-olympic-ships/ Wed, 27 Feb 2008 05:04:29 +0000

The idea to build the R.M.S. Titanic and her sisters, the R.M.S. Olympic and the H.M.H.S. Britannic, first began to take shape in 1907. These three ships together were White Star Line’s Olympic Class ocean liners. (I will use “the Olympic(s)” in reference to the class of vessels throughout this text for the sake of clarity.) Few vessels in human history have become as well known and infamous as the R.M.S. Titanic.

In examining the R.M.S. Titanic from a perspective of project management it is important to first identify what type of product this project was set to produce. Unlike many projects where the final customer will own the final product the Titanic was designed to deliver a service, particularly a ferry service, to its end customers. This creates an interesting challenge in discussing “Project Titanic” since most views of project management see a project as having a discrete beginning and end as well as clear, well-defined stakeholders.

In the case of a project such as the R.M.S. Titanic we can take two views and approach the issue from two sides. In one case we have the project by which the three ships of the Olympic class were conceived, designed, built and delivered to White Star Lines. In the other case we have the R.M.S. Titanic as it was customized beyond the extent of its elder sister, the R.M.S. Olympic, completed in initial production and delivered, as a service, to the passengers whom it was to ferry between Southampton and New York. To maintain scope I will not discuss the even larger project of testing, bug fixes, repairs, scope changes and enhancements that were applied to the two sister ships after the sinking of the R.M.S. Titanic. Both the R.M.S. Olympic and the H.M.H.S. Britannic saw many changes during their years of service, including the re-scoping of the Britannic from the role of an ocean liner to that of His Majesty’s Navy’s principal hospital ship during World War I, and the outfitting of the Olympic with a double hull and additional lifeboats as the crew refused to sail her until she had been made safer. (“Olympic”)

It is my goal here to examine the Titanic as a service from conception to service delivery and, ultimately, service failure. From this perspective the Titanic can be treated much like one would treat a modern Software-as-a-Service (SaaS) project. Because of the nature of a ship such as the Titanic, or SaaS products such as Salesforce.com or SugarCRM, we need to consider the intended lifespan of the product and the ongoing upgrades and maintenance that will be needed to keep it operating. The Titanic required a huge staff of pilots, seamen, chefs, porters, stewards and more while at sea and required re-outfitting and repairs; had she survived, she would have needed a new double hull like the one the R.M.S. Olympic received. A SaaS project will similarly require a staff to maintain the datacenter and networking, ongoing upgrades and bug-fixes, new features, etc. In both the case of the Titanic and of a SaaS project there is a real potential for a service disruption that could prove to be extremely costly. Maintaining steady, reliable operations is a major factor in the success of either of these business plans.

Let’s begin our analysis of the project to bring the Titanic to fruition by examining the stakeholders. We can easily identify the passengers and crew of the Titanic as stakeholders, White Star Lines as a company as well as the project engineers, Harland-Wolff as the constructors, Alexander Carlisle and Thomas Andrews as shipwrights and designers at Harland-Wolff, Captain Edward John Smith who was responsible for service delivery and finally White Star director Joseph Bruce Ismay and his executive staff who were in the role of project sponsor. In any project of this size there will be many important players all of whom have some stake in the project. We will focus just upon these key people as the most prominent stakeholders in the Titanic project.

The Titanic project most closely resembled a Waterfall Model in IT Project Management parlance. The process started with a high level concept coming jointly from Joseph Bruce Ismay representing White Star Lines and Lord James Pirrie representing their ship building partner, Harland-Wolff. The project was conceived jointly between the two companies. The Titanic would offer great prestige and profit potential for both firms and would require a large investment from both. While we do not appear to have the original project charter available to us today we can view the meeting between Ismay and Lord Pirrie as the unofficial project charter and the initiation of the project. (Sadur)

The technical design of the Olympic class ships was undertaken by Harland-Wolff head shipwright, Alexander Carlisle, after high level plans had been drawn up by Ismay and Lord Pirrie. Carlisle was the lead designer on the project from its inception until 1910 when he retired and turned over the lead design role to Thomas Andrews, the managing director of Harland-Wolff (Brander 1998). Andrews would be responsible for the final stages of the Titanic’s design. The Olympic, launched in the fall of 1910, would most likely have been completely designed under the direction of Carlisle. Since the Titanic shared almost all of the infrastructure of the Olympic (hull design, compass placement, lifeboats, bridge, etc.) we can safely assume that Andrews’ contributions to the design of the Titanic were mostly aesthetic or, in software development terms, “interface” related. (Thinkquest)

Because of the construction-like nature of shipbuilding and especially with mammoth ships such as the Olympics the design process involves heavy upfront design with very limited feedback loops later on once the designers can physically inspect the ship. In software terms this is referred to as “Big Design Up Front” or BDUF. In software where changing requirements are common this is generally considered to be a very bad practice but in mechanical and structural engineering this is simply the only reasonable solution.

As work progressed on the Olympics several decisions were made regarding the core infrastructure design of the ships. This is especially dangerous as the methodology in place for this type of project was not designed to allow for changes of this nature once the plans were approved. A ship is designed as a holistic system with interdependent safety systems and a high degree of complexity. Unlike most software it is very difficult to rapid-prototype a ship to the degree of accuracy needed to ensure safety. Making key changes to safety systems would have required a full scale reworking of the specifications to be done correctly. But since changes were made because of cost savings, timeline issues and luxury appointments there was little holistic thinking put into the changes. (Kozak-Holland)

The original design and intent of the Olympics was that they would incorporate the very latest technologies for safety. After the initial design was complete business pressure from White Star Lines and especially J.B. Ismay was put onto the architects and engineers to remove safety features in favor of first class luxuries. The two most famous changes were, of course, the removal of the lifeboats so that the views from the cabins would not be unnecessarily obscured and the modification of several of the bulkheads to no longer seal and to rise just ten feet above the water level in order to allow for an expanded grand ballroom facility. As with IT projects, engineering decisions for core systems are generally beyond the ken of business executives. Allowing business side decisions to influence otherwise technical decisions is dangerous as the usual precautions and thought processes are bypassed and expertise is overlooked. In this case these were issues pertaining to the care and preservation of human life. In software we seldom have such an important task at hand but allowing business managers without understanding of key systems to be involved in their design can be exceedingly harmful even if the result is as benign as loss of business or greater cost of operations.

One of the most dramatic problems that arose from the late changes to the Olympics’ designs was that the changes, each being small on their own, were most likely never taken together and examined as a whole in the same manner that the original ship design had been. When the lifeboats were removed the engineers involved were thinking that the ship was designed to be a floating life raft and that the purpose of the lifeboats was simply to ferry passengers from a motionless ocean liner to another ship in the “worst case scenario” that the Titanic or Olympic were to lose power. Even a collision was expected only to make them float low in the water, not to sink. The lifeboats were removed until only the minimum legal number remained at the behest of White Star Line’s executive committee. To the engineers this would have been an acceptable, although not recommended, safety trade-off. The design of the ship was such that having additional lifeboats was not a legal requirement nor was there any driving need to keep them, as the usefulness of the additional boats was believed to be minimal. In the end, it would not have been the decision of the engineers but of White Star, who was the final customer to the shipbuilders and whose decisions provided them with their livelihoods.

On its own the decision to remove the lifeboats would, most likely, have been a minor one. But the additional decision to change the key structural design of the Olympics – lowering four of the previously tall bulkheads to just ten feet above the waterline – meant that the concept of the ship as a floating life raft was compromised. The bulkheads were never truly water tight as the marketing materials would have led us to believe, but they were quite tall and very water “resistant” and would most likely have been able to keep water from traveling between compartments under even a very serious hull breach. As the initial design of the ship was replete with safety features this change would have been considered, like the lifeboats, to be the removal of a redundant feature, merely lowering the ship to more common safety criteria. Taken individually the changes were mostly innocuous, but taken together the changes became a complete redesign of the ship and completely disastrous. At no point did the qualified engineers go back through and do a complete reassessment of the safety features of the ship with all of the changes in place.

In some respects we could look at J.B. Ismay as a micro-manager. He did not trust the decisions of those whom he had hired for their technical expertise and overrode, either directly or through indirect pressure, their decisions. Micro-managing has many negative results. The most obvious is untrained and unqualified managers influencing decisions that others believe to have come from qualified professionals. Others include eroding the value generated by the project team and degrading employee loyalty and morale.

In shipbuilding, in situations where ships are built as a class such as the Titanic, we need to consider three principal project phases: design and construction of the prototype, the Olympic; design refinement and construction of the Titanic; and finally service delivery and operations. In the case of the Titanic in particular we see that the principal design and any structural changes were made before the completion of the Olympic. Thomas Andrews sailed on the Olympic, but this was mostly so that he could make aesthetic modifications for the Titanic as it was far too late for structural work to be changed at that point. For the same reason, Andrews was sailing on the Titanic so that he could do the same for the soon-to-be-launched Britannic (known to Andrews as the Gigantic.)

In terms of project scope we can see that the project adhered closely to the initially established plan. Construction was done based on preapproved plans with little change. The most dramatic changes in the scope occurred during the construction of the Titanic when the project had to be halted in order to accommodate the repairs of the Olympic. Both parties, Harland-Wolff and White Star Lines, understood that the Titanic would be delayed but that the serviceability of the Olympic took precedence. A major factor in any type of capital construction project is the need for contract level agreements between build phases as scope change is nearly impossible to accommodate once construction has begun. (“Olympic”)

It is difficult to find software projects that closely follow this type of model with a production prototype followed by a series of production implementations based upon the prototype but not identical to it. The closest example of this that I can postulate is the enterprise resource planning (ERP) package SAP. With this package, and others of its ilk, customers buy the package based upon its prototypical performance and features and then either on their own or through a consulting firm or the original vendor will heavily modify the package to their own needs. Generally the advantage to such an approach is that the core of the software package, the infrastructure, is very stable and well tested and often used across a wide degree of situations giving it both project side as well as client side testing. Care must be taken, of course, because customer initiated modifications will not have the benefit of the deep testing that one hopes has been performed on the core infrastructure nor do the changes have the benefit of “many eyes” from the wider client community.

In the case of the Olympic class ships there was serious testing done on the fifteen foot model of the ship and testing was done on the Olympic upon completion. With a ship of the Olympic’s complexity it is critical that real world testing be performed in addition to unit testing of individual systems. The Olympic was put through the usual testing measures that were standard for a ship of this type. However, when the Titanic was built the builders and the cruise line decided that as the Titanic was essentially a copy of the Olympic that the testing done and the ongoing successful use of the Olympic was test enough for the Titanic and the Titanic received very little additional testing. This, however, is not a best practice as mariners know that all ships, even copies, behave differently and each vessel is unique and must be tested on its own. (Kozak-Holland)

The Titanic was given almost no time for testing or performance trials. This came about partially because the Olympic had had a serious accident and had to be taken to the shipyards in Belfast to be repaired. While the Olympic was under repair the Titanic had to wait, as only one ship of that size could be worked on at a time. This placed business time constraints on the Titanic as she was scheduled for regular transatlantic route duty and was needed immediately. Because of this, some additional testing that would likely have occurred was skipped and the Titanic was sent out with its primary testing being the journey from Belfast to Southampton; even on this leg of its journey there was at least one paying passenger, making it more a low-capacity real voyage than a proper test. (Kozak-Holland)

It would appear that White Star Lines and J. B. Ismay were quite willing to take on exceptional project risk in order to get the ship into regular service as quickly as possible. Through standard maritime procedure they mitigated much of their capital risk through maritime insurance. This would protect them against many potential unknowns.

During the last half of the nineteenth century it was becoming increasingly common for both shipping lines as well as governments to see risk as a low priority issue. The S.S. Great Eastern built in 1858 is considered to have been, and proved to be in real world cases, far safer than the designs of the increasingly unsafe ocean going vessels that followed her over the next fifty-four years. Conditions would continue to decline until companies and governments reevaluated the situation in the wake of the Titanic’s sinking. It is argued that shipping companies saw acceptable safety track records as justifying their lackadaisical attitude towards safety over decades of relatively incident-free shipping. Financial market pressures won out favoring companies with loose safety standards encouraging the industry as a whole to move away from expensive risk management. (Brander 1995)

Under the guise of further mitigating the risks due to a lack of testing and training, several crew members, most notably Captain Smith, were brought over from the Olympic to sail the Titanic on her maiden Atlantic crossing. This could be seen as Ismay seeking experience that would appear to lessen the “unknowns” of sailing a ship without testing her directly, by bringing aboard the maximum experience from the prototype vessel. However, this may not be the underlying reason for the decision and it is very possible that Captain Smith was chosen because his position with White Star Lines was rather questionable after he had recently caused a serious accident involving the H.M.S. Hawke, which had forced the Olympic into emergency repairs and had delayed the sailing of the Titanic. Captain Smith was likely nervous, concerned for his career and in little mind or position to act as the final level of responsibility aboard the ship if pressure from the firm directed him against his better judgment. This may have been exactly the operational loophole that White Star Lines was seeking. This situation was probably exacerbated by Captain Smith coming in too close or too fast to the S.S. City of New York, moored in Southampton, causing her to break her moorings and nearly collide with the Titanic. (Kozak-Holland)

Under customary maritime law, a captain is the absolute commander of the ship and has complete jurisdiction while at sea even if higher ranking officials, military or civilian, are aboard. The captain has moral and legal responsibilities first to the lives and safety of the passengers and crew and then to the cargo and the ship. (Kuntz)

Once the Titanic was built, outfitted and available for sailing we see a stage change and we move into the service delivery phase of the overall Titanic project. In this stage we are past the traditional stages of project management. In most scenarios a client would now have taken possession of the finished product. But in the case of the Titanic this becomes the service delivery phase.

Under service delivery White Star Lines took responsibility for any new issues that would arise with the ship. At this point the traditional system of design – build – test would no longer be used and instead the service delivery would be overseen by standard operating procedure or SOP. Ongoing ship modifications, repairs, tuning and the like would still continue but these would be designed to not be at the level of requiring a service interruption but would be minor and done without the knowledge of the final customers – the passengers. It is at this stage that the passengers arrive as our most critical stakeholders because, in this scenario, they are not just financial stakeholders but are literally staking their very lives on the reliability of the ship and the operations of the crew.

In the Agile Project Management community there is a fable often used to denote roles within the stakeholders. These roles are known as the pigs and the chickens. The fable tells us of a chicken and a pig who are interested in opening a restaurant together. The pig asks the chicken what they will serve. The chicken replies, “Well, bacon and eggs, of course.” To this the pig responds “I don’t think that I am interested. You will be involved but I will be totally committed.” (Schwaber 7)

The pig and the chicken metaphor is normally used to express the difference between stakeholders who have real money or careers on the line versus stakeholders who have a vested but non-critical interest in the project. The chickens would prefer not to see a project fail but failure is not necessarily devastating for them. In the case of the Titanic we see that the financial stakeholders, Harland-Wolff and White Star Lines, were effectively chickens. They had much to lose but their investment was insured and later, we will see, the government was even willing to protect companies of this nature at this time due to the pending war with Austria and Germany. Neither White Star nor Harland-Wolff were “totally committed” – they had a definite interest and the success of the Titanic was extremely important to them but the passengers and crew of the Titanic were truly the pigs here willing to put their very lives at stake. Seldom is the chicken and pig metaphor more appropriate.

In order to ensure a higher quality of ongoing service a guarantee group from Harland-Wolff was present on the maiden voyage. This team included many important members of the Harland-Wolff design and construction staff including the lead designer Thomas Andrews. This guarantee group was customary on all major projects of Harland-Wolff. This staff would use the voyage time to assess the construction under differing conditions from their tests, gauge customer satisfaction and look for opportunities for improvement. Thomas Andrews had already sailed on the Olympic for this very same purpose and had made many tiny changes to improve the Titanic. He would spend part of this journey, for example, designing less expensive clothing hooks for passenger rooms. (“Guarantee Group”)

The Guarantee Group comprised specialists from many different technical practices within Harland-Wolff. We see representation from the fitters, electricians, joiners, draughtsmen, design team and more. This group, with its varying specialist areas and its range of experience levels, with both seniors and apprentices included, would have been an exceptional cross-section of the project team that built the ship. Their presence aboard, with the care given to appraising the workmanship, design and other final components, can be seen in two principal ways.

In the first way we can see this as a "post-mortem" performed on the Titanic project. It was the role of this team to assess the technical success of the project, to look for areas for improvement and to generate "lessons learned" so that future projects could benefit from the experience gained here. Considering the cost of the transatlantic trip and the time spent away from their regular duties, this was a serious investment in project knowledge by Harland-Wolff and extremely commendable.

Taken in another light, this guarantee group could be seen as providing feedback on a construction iteration: the Olympic's construction being the first iteration, the Titanic's the second and the Britannic's the third. In this approach we see a type of Agile feedback loop being utilized, as much as possible, to allow for customer input even on such an extreme capital construction project. The iterations are very long, but this is by necessity. In this way we can view the Titanic as both a project unto itself, being a discrete deliverable, as well as part of the ongoing project to deliver passenger service via the Olympic family of vessels.

The guarantee group being aboard ship would have presented the opportunity for the technical teams to get a first-hand appreciation of the real-world application of their product. Rarely would a technical specialist be in a position, in 1912, to travel on a ship of this level of luxury. Without Harland-Wolff sponsoring this chance for its staff to witness their own workmanship in service, they might never have understood their own roles in providing services to their final customers.

Having apprentices included in the guarantee group meant that informal, one-on-one or small-group training could be performed. The apprentices and the senior technicians would have had many days to work together, and the apprentices would have had a great opportunity to learn from their seniors in a setting conducive to team building and knowledge transfer. In many ways we can see this time as being similar to the off-campus team building sessions or retreats popular with many companies and project groups today.

Where we find the biggest surprise in our analysis of the Titanic is the almost total lack of standard operating procedure in use on board the ship. Some processes and procedures were documented but many were not. Examples of processes that were not standardized included key communications processes such as moving messages from the wireless office to the bridge, alerting passengers to the ship sinking and alerting the bridge that the crow’s nest had seen an iceberg. (Kozak-Holland)

Standard Operating Procedures are absolutely critical in any service delivery situation. In some companies these can even be considered so valuable as to constitute the core intellectual property of the company. Without an SOP a company is no more cohesive than the inherent "teamness" of its staff which, in the case of new employees, may be nominal. Staff will have to rely entirely upon best practices, convention, informal instruction or, hopefully, training to learn their jobs and processes. But these will not be standardized if they are not written down; training will inevitably vary from trainer to trainer, and no employee can retain all instructions for all possible scenarios.

Under normal conditions the lack of standard operating procedures may be of relatively minor importance. Staff can perform most job functions adequately, especially if trained, without consulting an SOP; if they could not, they would need to carry the SOP with them at all times. The SOP becomes extremely critical when "normal operating procedures" are no longer available or, in more modern terms, when operations are no longer under BAU (Business as Usual) conditions. For the Titanic, BAU conditions were broken several hours before the iceberg incident.

In the case of the Titanic it is difficult to discuss standard operating procedures without also discussing communications. So we will begin with communications under BAU conditions and then see how the lack of an SOP caused the situation to deteriorate rapidly.

The Titanic was plagued with communication design flaws from the beginning. The wireless room, responsible for all communications going in and out of the Titanic, was not operated by White Star Lines but was instead staffed by Marconi personnel who were paid to communicate first and foremost for the passengers and for the ship only if time allowed. The wireless operators were overworked and undervalued and would often not relay messages to the bridge because they had other duties that were considered to be of a higher priority by Marconi, their employer. At least eight iceberg warnings were sent to the Titanic’s wireless room but only some of these were relayed to the bridge. (Kozak-Holland)

This incident emphasizes the importance of managing third-party contractors via a service level agreement. White Star Lines, in allowing the Marconi Company to staff its wireless room, should have had a clear SLA demanding that safety and emergency communications for the ship take absolute priority over the personal messages of the passengers. It should also not have allowed Marconi to profit from, or be financially benefited by, failing to follow that SLA. As an outside contractor, Marconi should have had a contract designed for mutual benefit, one in which acting correctly, as stated, served every party's interests. Contracts that give a vendor financial incentives to work against the good of its client are very unwise.
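As a purely anachronistic illustration (no such system existed in 1912, and the message classes and text below are invented), the kind of priority rule such an SLA would encode can be sketched in a few lines:

```python
import heapq

# Hypothetical sketch of an SLA priority rule: safety traffic always outranks
# paid passenger traffic, regardless of revenue. Classes and messages invented.
PRIORITY = {"safety": 0, "navigation": 1, "passenger": 2}  # lower number = handled first

def order_by_sla(messages):
    """Yield messages in SLA order rather than revenue or arrival order."""
    heap = [(PRIORITY[kind], i, text) for i, (kind, text) in enumerate(messages)]
    heapq.heapify(heap)
    while heap:
        _, _, text = heapq.heappop(heap)
        yield text

traffic = [
    ("passenger", "Greetings from mid-Atlantic, arriving Wednesday."),
    ("safety", "Ice warning: field ice reported directly ahead."),
    ("passenger", "Please wire funds to the New York office."),
]

for message in order_by_sla(traffic):
    print(message)
```

The point of the sketch is only that the contract, not the operator's pay structure, should determine which traffic is handled first.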

The most important single incident involving the Marconi operators was the final iceberg warning sent by the SS Californian, which was extremely near to the Titanic, so near that it would later be the ship to see the emergency rockets fired from the Titanic. The Californian radioed the Titanic a warning that it had become trapped in pack ice and could not proceed, at any speed, due to the dangerous conditions. The Marconi operator replied "Shut up, shut up, I am busy. I am working Cape Race and you are jamming me." It could hardly be made more clear where the priorities of the wireless room lay even with such danger looming so closely. Not only did the wireless room not keep communications open with the Californian but it also declined to inform the bridge of this final external warning. In frustration the Californian's radio operator gave up his attempts to warn the Titanic, turned off his wireless and went to bed. The Marconi operators not only failed to heed the warnings but alienated external channels such that the only ship near enough to rescue them did not respond when the Titanic began to sink. (Kuntz)

Communications from the bridge to the crew at large and to the passengers followed no official process, were manual and were, at best, handled on a best-effort basis if attempted at all. The bridge did not notify anyone that a collision was imminent and no one was braced for what could have been a very serious impact. Once the ship began sinking it took over an hour before the bridge began informing the rest of the ship that they were going down. Key information affecting the lives of the passengers and crew was kept from them and held by just a few bridge personnel who were most likely hoping to keep the incident a secret or to minimize the publicity despite the obvious risk to human life. As there was no system or process for communicating from the bridge to the ship in general, keeping the news quiet was a simple matter; it took a concerted effort to inform the passengers of anything at all. (Kozak-Holland)

Communications amongst the crew were little better. The Officer of the Watch, for example, was stationed outside of the bridge while his critical communications links were located inside the bridge house. So the watch was unable to rapidly communicate with other bridge personnel or to coordinate with the crow's nest and other related functional areas. The crow's nest and the watch were connected by a one-way bell system that did not allow them to communicate in duplex and was very slow, and the watch had no means of feedback from the quartermaster at the helm when an emergency command was given. Commands were given by shouting from the open air towards the enclosed bridge. The watch could only hope that the quartermaster inside the bridge had heard, understood and was acting upon those commands. Communications involving the standard compass were just as bad. The compass, instead of being located above or near the bridge, was placed far aft, and the bridge was forced to coordinate with the compass on a regular basis, which caused much confusion and delay. Little, if any, thought was put into making the bridge effective or safe. This lack of design for communication certainly did little to help the Titanic when rapid and accurate communications were necessary. (Brown)

When external communications to the White Star office in Boston were finally sent, the information relayed was that there was no serious damage and that the incident was very minor. Unlike the point-to-point communications common today, however, this information was broadcast and could easily be intercepted by other ships and relay stations. Ship-to-shore communications were often used to effectively release information to the press under the guise of an internal communiqué. So instead of relaying honest and critical information the wireless was used as a marketing tool. What was sent was not a distress signal but was effectively nothing more than a press release designed to put "spin" on the situation. (Kozak-Holland)

Communications are key at any stage of any project. In the case of the Titanic the catastrophic situation highlights issues that arose because of communications, but this is simply a worst-case scenario. Projects need as much up-to-date and accurate data as possible when making decisions. Without it, decisions are made from only a partial picture, and the less correct the available information, the less likely it is that a good decision can be made.

Perhaps the greatest project management issue affecting the Titanic, however, was its lack of standard operating procedures. The SOP should have been produced as an essential project deliverable during the later stages of the construction process. For a ship to have been deemed seaworthy when there was no SOP to operate her is truly inconceivable. Even the most agile of development methodologies does not overlook the need for end-user documentation.

Since the design and construction portions of the project had failed to provide the crew of the Titanic with an SOP or, at least, a reasonable SOP (there were some generic standard procedures in the White Star Lines manual itself) there were no clearly defined rules or processes for dealing with communications, tracking alerts, providing warnings, etc. There was no emergency procedure to be followed and so the crew was forced to act on nothing other than experience and general mariner knowledge.

It is at this point, when examining the actions of the crew under emergency conditions, that we witness the complete breakdown of the command structure. In a traditional business the executives are generally accepted as having the final decision-making power on any corporate action as long as it falls within legal boundaries, and often when it does not. In the average business a bad operational decision results in a loss of revenue, not a loss of life. A wise executive will understand the need not to override the decisions of those specialists hired to make operational decisions, or possibly a board will require that an executive listen to her staff. Nonetheless, the idea of business-side executives interfering with project operations is against best practices and is widely accepted as being a bad course of action.

In the case of the Titanic, Captain Smith was in command of the vessel at sea and was personally responsible for the ship and the souls aboard. His boss, J. B. Ismay, may have had the ability to have Smith removed from command upon returning to port, but at sea he did not, nor under British maritime law did he have the right to give commands from the bridge, as he was not a licensed mariner. (Kuntz)

During the time leading up to the iceberg collision J. B. Ismay had been pressuring Captain Smith to drive the ship at an irresponsible speed, in excess of twenty-four knots or seventy-five revolutions. The Olympic, considered the "test" for the Titanic, had never crossed the Atlantic at this speed, and the Titanic was now operating outside even the range of tests performed on the Olympic, without time enough to have completed a single Atlantic crossing under normal conditions. Ismay and Smith drove the Titanic beyond its known performance parameters and, more importantly, beyond the crew's known operational parameters. It was simply unknown what operational risks would be involved with the ship at this speed. To maintain what should have been considered an unsafe speed while heading into waters known to be strewn with icebergs was extremely foolish.

Whether because of panic, confusion, insecurity or something else we do not know, but when the Titanic struck the iceberg Captain Smith allowed a layman, J. B. Ismay, to come onto the bridge and begin issuing executive orders as the acting ship's master, conduct for which Captain Smith had the right and the obligation to have Ismay removed. Ismay made key operational decisions, including those on emergency communications and passenger notification and, most importantly, the decision to move the Titanic forward off of the ice shelf, which is believed to be the actual cause of the ship's main rupture, and then to keep the ship moving forward at slow speed, pulling the hull apart, even after additional information was available that the ship was going to sink. (Kozak-Holland)

Given the distance in time that we are now from the Titanic it can be difficult to assess processes followed and to know what went right with the project when we know so much that went wrong. The sinking of the Titanic is so iconic in our minds that to see it as anything but a marketing and organizational disaster is difficult at best.

In the end the Titanic project was immense but well managed. Scope was controlled and changes were accommodated when necessary. Large design up front, with a well-established contractual interface to the construction phase, was used, and this locking down of the specifications allowed for careful and accurate scheduling. The processes by which the ships were built were standard and well known. Using historical construction data Harland-Wolff was able to accurately predict the time needed for construction, allowing White Star Lines to begin marketing and sales long in advance of the actual sailing of the ships. The Titanic, being an almost identical copy of the Olympic, held even fewer surprises. The only true surprise came from a change of priorities at White Star Lines that put the Titanic project on hold for approximately one month.

In the words of J. Bruce Ismay “She [Titanic] was not built by contract. She was simply built on a commission.” This indicates that exceptional authority was granted to Harland-Wolff to use their own processes and oversight to ensure the delivery of the Titanic. The two companies operated more as partners than in a vendor-customer relationship. (Kuntz)

Project risk for the Titanic was handled poorly, relying heavily upon external insurers and, in the end, the British government to protect the company from liability lawsuits at the expense of the primarily British and American passengers. Risk was considered to be very low and, because of this, many careless decisions were made, first with the Olympic and then, when operational disasters were minimal, even more so with the Titanic. No careful risk assessment was made. Expert mariners could easily and quickly have identified many risk areas that needed to be addressed. Issues such as the lack of a complete Standard Operating Procedure would have been flagged and could have been handled easily, since the resources for this would not have needed to come from the existing Titanic team and would not have impacted the delivery date or any of the other variables that we now understand to have been of primary concern to White Star Lines.

Communication on the project seems to have been handled very well until service delivery began. At that point design flaws, questionable decisions and the lack of an SOP came to bear on the communication network on board the ship. This communication could be considered operational rather than project based, but the argument is semantic. The issues with the Titanic were holistic: had proper design methodologies been followed, the risk analysis would not have been missed, which would have forced the creation of the SOP, which in turn would have highlighted or perhaps even fixed the communications issues.

At its core the question was one of quality. The Titanic was proposed and sold as the highest quality transatlantic travel option. Quality was heralded, directly or indirectly, in almost every breath spoken about the Titanic. The customer interface was kept as clean and concise as possible. No expense was spared if the results would be witnessed by a customer. But the underlying core, the infrastructure of the project (the non-functional requirements, according to Kozak-Holland, although I do not agree with that use of the term in this context), on which this "quality" was to rest, was ignored, and the true quality of the Titanic and of the operations of White Star Lines would ultimately become evident.

Bibliography and Sources Cited:

Schwaber, Ken. Agile Project Management with Scrum. Redmond: Microsoft Press, 2003.

Kuntz, Tom. The Titanic Disaster Hearings: The Official Transcripts of the 1912 Senate Investigation. New York: Pocket Books, 1998. Audio edition via Audible.

Kozak-Holland, Mark. “IT Project Lessons from Titanic.” Gantthead.com the Online Community for IT Project Managers. (2003): 22 February 2008

Brown, David G. “Titanic.” Professional Mariner: The Journal of the Maritime Industry. (2005): 23 February 2008

Sadur, James E. Home page. “Jim’s Titanic Website: Titanic History Timeline.” (2005): 23 February 2008.

ThinkQuest Library. “Designing the Titanic.” (Date Unknown): 25 February 2008.

Titanic-Titanic. “Olympic.” (Date Unknown): 25 February 2008.

Titanic-Titanic. “Guarantee Group.” (Date Unknown): 25 February 2008.

Brander, Roy. P. Eng. “The RMS Titanic and its Times: When Accountants Ruled the Waves – 69th Shock & Vibration Symposium, Elias Kline Memorial Lecture”. (1998): 25 February 2008.

Brander, Roy. P. Eng. “The Titanic Disaster: An Enduring Example of Money Management vs. Risk Management.” (1995): 25 February 2008.

Additional Notes:

Mark Kozak-Holland republished his 2003 Gantthead articles on the Titanic into a book:

Kozak-Holland, Mark. Lessons from History: Titanic Lessons for IT Projects. Toronto: Multi-Media Publications, 2005.

More Reading:

Kozak-Holland, Mark. Avoiding Project Disaster: Titanic Lessons for IT Executives. Toronto: Multi-Media Publications, 2006.

Kozak-Holland, Mark. On-line, On-time, On-budget: Titanic Lessons for the e-Business Executive. IBM Press, 2002.

US Senate and British Official Hearing and Inquiry Transcripts from 1912 at the Titanic Inquiry Project.

]]>
https://sheepguardingllama.com/2008/02/project-management-of-the-rms-titanic-and-the-olympic-ships/feed/ 3
Netflix, AppleTV and the End of Television https://sheepguardingllama.com/2008/02/netflix-appletv-and-the-end-of-television/ https://sheepguardingllama.com/2008/02/netflix-appletv-and-the-end-of-television/#respond Thu, 14 Feb 2008 18:25:48 +0000 http://www.sheepguardingllama.com/?p=2261 Continue reading "Netflix, AppleTV and the End of Television"

]]>
I have written before about the downfall of broadcast television, including cable television and other "one-to-many" legacy distribution systems for video content. I have written that the DVD would be the last big physical media format for movies and that Blu-ray and HD-DVD would never have the chance to be as popular because the end of physical media had arrived. They will go down as the last effort of the industry to hold on to a changing marketplace.

I have written these things and have been disputed again and again: television is so dominant, and the idea of getting videos on physical media so core to our culture, that it would be many years if not many decades before these things change. But I believe that the end is already here. It is driven, in part, by the industry division caused by competing media formats too complex for the average consumer to differentiate between, partially by the poor standards of HDTV and its inability to handle the de facto high definition standard of 1080p, partially by intentionally misleading marketing and specifications on high definition display products, but mostly because the time and technology are right.

There are several technology players who have stepped up to the plate recently to tackle the world of physical and traditional media. I have opined in the past that non-commercial services like YouTube, Google Video, Vimeo and RSS feed based downloadable content from shows like Rocketboom, Wandering West Michigan and others, through software like FireANT or Democracy, would be the disruptive factors deciding the fate of media. I still believe that they will remain major players and, over time, will come to dominate the marketplace as people turn away from commercial production, finding more niche content delivered in a more personal way to be more valuable. But before that can happen there is an intermediate phase, I believe, in which commercial content will be delivered through next-generation methods, and this will remove the underpinnings of traditional media.

Enter Netflix and AppleTV. There are others, of course. And some that came earlier. Amazon Unbox covers much of the same ground. But Netflix and AppleTV look to be the most disruptive and visible of the players in this new content delivery space.

The first serious, large scale implementation of a network delivery system for digital video content came from Apple's iTunes. iTunes and AppleTV together form a cache-and-store content delivery network with complex Digital Rights Management (DRM), allowing for a simple and traditionally styled interface to television-like content delivered over the Internet. Because of its cache-and-store architecture iTunes is able to function with very high definition video even over slower and less reliable network connections. The iTunes licensing team has secured a large volume of current television shows and movies that can be purchased through iTunes and watched on a computer, on a media center or on the AppleTV. The system is straightforward for most consumers and works very well. And the quality of the content generally meets or exceeds the alternatives of broadcast HDTV or DVD. Additionally the iTunes system blends alternative content from RSS/Atom feeds seamlessly into the picture, allowing The Jet Set Show or Channel Frederator programs to appear as any other "television" content. Even YouTube can be viewed through the system. For consumers used to the high costs of cable and the unavailability of broadcast signals, iTunes and AppleTV are a high quality, low cost competitor to traditional television, with the advantage of having no commercials and all content being available on demand.

Netflix has recently entered the arena with their own disruptive service. Netflix's primary business is as a movie rental alternative whereby movie renters can sign up for a monthly rental service and have DVDs or, more recently, HD-DVD and Blu-ray Discs, delivered to them by post. The cost is extremely low, and the ease of use and vast selection make it very easy to choose over traditional rental services. Over the past few years Netflix has become very popular, especially with the serious cinema market. The new service from Netflix is the ability to view movies over the Internet via a streaming video service. This service is included with all of the normal movie rental pricing plans, making it "free" for their current user base to test and try. This service, for people with moderate quality Internet connections, provides instant access to a massive, and constantly growing, library of "on demand" movies, documentaries and television programs. For only a tiny fraction of the normal cost of cable service one can subscribe to Netflix's unlimited download service and get unlimited, commercial-free, on-demand content. The system is new but massively disruptive.

What is truly amazing about these two systems and their competitive counterparts like Amazon Unbox is that they are not competing with the content of current media but only with the content delivery system. By switching from traditional television and movie rentals to these services one will, under the vast majority of circumstances, save money, increase ease of use after an initial learning curve, remove commercials, remove reliance on "schedules" or "hours of business", reduce necessary planning, increase selection, increase quality and remove the expensive and incompatible devices, such as DVRs, which are currently popular for "mimicking" these types of services.

What we are seeing now is an adaptation allowing people to continue to use the content that they are used to while receiving it through modern methods.  These new distribution systems will, in all likelihood,  prove to be ideal conduits for new types of content that can be delivered just as easily as traditional content.  The end of traditional television is here.  No longer is television just a legacy technology delivering a unique form of commercial entertainment and content that was not yet available through modern means – now it is simply legacy.

]]>
https://sheepguardingllama.com/2008/02/netflix-appletv-and-the-end-of-television/feed/ 0
Non-Measurable Organizational Value https://sheepguardingllama.com/2008/02/non-measurable-organizational-value/ https://sheepguardingllama.com/2008/02/non-measurable-organizational-value/#respond Wed, 06 Feb 2008 02:08:22 +0000 http://www.sheepguardingllama.com/?p=2247 Continue reading "Non-Measurable Organizational Value"

]]>
In addition to the concept of the Measurable Organizational Value or MOV metric that I discussed several days ago, I believe that we need to look at its contrary “metric” – Non-Measurable Organizational Value or NMOV.

Non-Measurable Organizational Value is the concept of corporate benefit being derived from a project that is inherently non-measurable. To some degree all value is difficult if not impossible to measure accurately except in extreme circumstances. Organizations are complex entities and any project or process is just one of many projects or processes that all contribute in some way to value.

Let us look at a quick example of a difficult to measure project. Project X at ABC Corporation (they give all of their projects letter-names for some reason) is a project to add four important new features to one of the company’s software products. The project runs its course and is estimated to have cost the company almost exactly $100K to complete. The new features are now available in the product which is available to existing customers as a free upgrade and are included in the shipping product to all new customers. Now we must determine the Organizational Value or OV of these new features.

These new features are designed to derive their organizational value from several planned sources. The first source is through "feature marketing". In this capacity we are left attempting to derive, generally from sales data collected both before and after project completion, how much of the change in sales is due to the new, added features. This has to be balanced against any changes in sales or marketing, changing market pressures, increased product maturity, changes in competing products, etc. Even asking customers to voluntarily report on this can be very misleading in the best of circumstances.

The second source of derived organizational value is through customer retention. This is far more difficult to calculate than the somewhat ephemeral marketing and sales aspect of the first source. To some degree you can calculate the number or percentage of customers who decide to go through the upgrade process if they are downloading the update from ABC’s web site. But you are still left estimating what percentage of customers did the upgrade simply because they felt that they should use the latest version, who wanted to get the features but would not have switched products to get them and which customers would actually have taken the effort to switch to another product to acquire those features.

This approach does not consider the market effects that ABC's new features have had on its competitors. In some cases, by supplying in-demand new features, ABC may have driven the market forward, forcing competitors to include those same features, but it may also have done research and development that its competitors can now copy at lower cost. Or perhaps it has implemented features so costly or specialized in nature that competing products avoid implementing those same features simply because they are already available on the market. Estimating ABC's market effect is very difficult if not impossible.

The third source of value is difficult to calculate as well. This third source is market preparation. By this I mean that two of these four features implemented by ABC Company are building blocks that are expected to lead into another project that will provide a number of really incredible and difficult to produce features in another year. A certain amount of planning and framework design was done during this project in order to prepare for the following project. A few hundred hours of manpower were put into this robust framework, which would have been excessive for just the current features but will allow future features to be added more easily. This technology investment will not see dividends until other projects, seeking their own organizational value calculations, are layered on top of it.

In this example of ABC’s Project X we see that this project was based mostly upon Non-Measurable Organizational Value. Some of these value sources could be examined and an estimate of OV could be extracted. But this is, at best, guesstimation and the organization would need a very carefully managed process to keep this type of guesstimating consistent and fair between projects. But there are other NMOV concepts that we should also address.
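To make the guesstimation problem concrete, here is a minimal sketch of the kind of back-of-the-envelope estimate described above. Every figure is invented purely for illustration; none comes from ABC or any real project.

```python
# Hypothetical back-of-the-envelope OV estimate for "Project X" at ABC Corporation.
# Every figure below is invented for illustration only.

project_cost = 100_000  # the stated cost of Project X

# Each value source gets a low/high guess because none of them is directly measurable.
estimated_value_sources = {
    "feature_marketing_sales_lift": (20_000, 120_000),  # extra sales attributed to the new features
    "customer_retention":           (10_000, 80_000),   # churn avoided because customers stayed
    "market_preparation":           (0, 150_000),       # framework value realized only by future projects
}

low_total = sum(low for low, _ in estimated_value_sources.values())
high_total = sum(high for _, high in estimated_value_sources.values())

print(f"Estimated organizational value: ${low_total:,} to ${high_total:,}")
print(f"Project cost: ${project_cost:,}")
# The range spans everything from a clear loss to a clear win, which is exactly
# why this kind of guesstimation needs a consistent, carefully managed process.
```

The spread between the low and high totals is the point: the uncertainty is wide enough to swamp the decision, which is why the consistency of the guesstimating process matters more than any single number it produces.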

One type of NMOV is employee morale and the related “development velocity.” Employees, especially technical workers, are highly affected by their work environments and generally desire to be allowed to produce good products. Developers and analysts will often state that the value and quality of their work is a driving factor in their level of job and career satisfaction. A company with high morale, satisfaction and happiness will generally get better products and it will get them out the door faster. Teams are more likely to gel. Sick time goes down. Communications go up. A project will not often be undertaken purely out of concern for employee morale but a feature, a methodology, a technology or a technique might be chosen for just this reason. A project might be written in Java instead of C++ just to keep a team more interested in their work, for example.

Now we can attempt to measure employee morale in several ways but we have two main arguments preventing us from doing any meaningful measurement. The first is that there is no good means of measuring an increase in morale based on project decisions. Finding single points of morale boost or morale decrease may be possible, but determining the overall morale delta caused by a specific project, separate from the overall organizational culture, is simply unreasonable. The second argument against being able to measure the OV is that even once a morale change is estimated, it is not possible to determine the degree of effect that this will have on the short term or long term organizational bottom line. And, of course, we must consider the morale-bolstering effect of an organization being willing to sponsor projects for the purpose of, or at least with consideration towards, employee happiness.

But this is not to say that employee morale has no value. Certainly it does. I believe that it is an important factor in organizational health. A very important factor indeed.

Any project looking to determine its own value cannot truly do so without considering the implications of non-measurable organizational value. But to what extent should these factors be considered? To this, I believe, there is no simple answer. In an extremely large enterprise, where many years of metrics could be collected and careful research into marketing, market pressures, competition, morale, productive velocity and more can be gathered and calculated across a system of hundreds of thousands of employees, I believe that we should be able to see reasonably consistent trends that lend themselves to reducing NMOV factors into estimable organizational value. However, even in this circumstance these calculations will have to be done at a very high level of abstraction with a significant amount of organizational research ongoing at all times. And the results, unlike OV from specific projects, are most likely to carry a good degree of confidence only when undertaken as organizational directives and initiatives and not on a project by project basis.

Large enterprise organizations, those with tens or hundreds of thousands of employees, are often caught by their own bulk and momentum and find that many NMOV factors do not exist for them in the same manner that they exist for small companies. At one extreme, a behemoth manufacturing company with a half million employees will find organization-wide morale or velocity initiatives difficult to implement and difficult to measure, and will find that their effects are minimal, as the momentum of the firm keeps these changes from trickling down to the majority of the workforce without being watered down by the established corporate hierarchy, until even the attempt at change is seen as a waste of effort. Large enterprises often settle into a relatively stable state of morale with only truly significant events having a serious or long lasting effect.

Small companies, especially those under five hundred employees, can find a much higher value in NMOV, in my opinion. Small companies gain some of their greatest advantages through the leverage of NMOV. In a tiny company of twenty people one great project could invigorate the entire company for potentially years. Culture and attitudes can change almost instantly and truly great velocities can be achieved and maintained. Smaller companies need to be more attuned to the NMOV factor than their large counterparts. It can be both an advantage and a disadvantage: small companies do not have the luxury of the employee-scale buffer (read: organizational momentum) to keep morale-busting events from dragging the company down quickly.

Some larger companies have become famous for managing to keep morale and culture at the forefront of their priority list with exceptional results. Notably Microsoft and Google have become poster-children for great corporate culture even in large firms. And both have become well known for consistently delivering strong technical products, moving the market forward, breaking new ground and keeping employees very happy. Both are also known for investing heavily in research and development which are often incorrectly thought to be NMOV activities but, over time, R&D activities have a reasonable level of estimable value.

NMOV cannot be the sole driver of organizational projects but it should not be discounted. NMOV should be considered even if, at the very least, only from the aspect of attempting to mitigate negative NMOV behaviour or project choices. To some extent, though, I believe that NMOV should be estimated through soft calculations and guesstimation to be accounted for within organizational project portfolio planning as well as corporate culture fostering activities.

Perhaps NMOV, given the potential value that it can add to an organization, should be considered not to be non-measurable but to be immeasurable.

]]>
https://sheepguardingllama.com/2008/02/non-measurable-organizational-value/feed/ 0
Feature and Specification Marketing https://sheepguardingllama.com/2008/02/feature-and-specification-marketing/ https://sheepguardingllama.com/2008/02/feature-and-specification-marketing/#comments Tue, 05 Feb 2008 21:03:07 +0000 http://www.sheepguardingllama.com/?p=2248 Continue reading "Feature and Specification Marketing"

]]>
Feature Marketing: Using features as a component of a marketing solution.

Specification Marketing: Using specifications as a component of a marketing solution.

Feature Marketing is traditionally seen in large lists of features of a product.  This can be done in an attempt to overwhelm a consumer with so many facts and figures that they simply assume that a product has whatever features they are looking for or to make them believe that one product is superior to another simply through a volume of features.

Feature Marketing can also be done more legitimately through the building-in of critical features for which consumers are actually searching.  A good example of this is the pivot table feature in Microsoft Excel and many competing spreadsheet applications.  Pivot tables are a key feature that many consumers want or need and will search for specifically when choosing a spreadsheet.

Feature Marketing could generally be used synonymously with “specification marketing” however the two should be recognized as being somewhat distinct.  A specification is an inherent property of a product while a feature is a non-inherent property.  For example, cupholders in cars are features.  A car can quite legitimately be manufactured without cupholders.  Cupholders are marketing features.

A good example of Specification Marketing is modern monitors and televisions providing native 1080p resolution.  This is a specification for which a large percentage of consumers are actively searching and a product of this type that neglects to include this specification is potentially crippling itself in the market or, at the very least, in the educated consumer market.  Resolution is a specification of a monitor or television and not a feature since no monitor can be made without a resolution.

References:

The Death of Feature and Function Marketing 

]]>
https://sheepguardingllama.com/2008/02/feature-and-specification-marketing/feed/ 1
Overview of Measurable Organizational Value (MOV) https://sheepguardingllama.com/2008/01/overview-of-measureable-organizational-value-mov/ https://sheepguardingllama.com/2008/01/overview-of-measureable-organizational-value-mov/#comments Tue, 22 Jan 2008 12:14:11 +0000 http://www.sheepguardingllama.com/?p=2227 Continue reading "Overview of Measurable Organizational Value (MOV)"

]]>
Measurable Organizational Value or MOV is a term coined by Jack Marchewka as an alternative tool to the more popular Return on Investment or ROI concept which has become a buzzword within the industry over the last ten years and has existed for many more. Marchewka defines measurable organizational value as being “the project’s overall goal and measure of success.”

Marchewka further breaks down the term MOV and says that it must implicitly include the following: be measurable, provide value to the organization, be agreed upon and be verifiable. Let’s look at each of these.

Measurability is obvious yet extremely difficult. Many benefits of IT projects are "soft" and inherently unmeasurable. For example, a project that makes employees happier cannot be measured, as it is never possible to determine how much happiness is the result of any one project and how much of other organizational efficiencies can be attributed to employee happiness and morale. And yet we almost all agree that happy employees work better, are more loyal, cost less and interact better with each other.

The idea behind measurability is that no project decisions should be made without a consideration towards how they will affect the project’s MOV. If a new feature is being considered, for example, then that feature should be compared against the MOV. If the feature will not increase the MOV then it should not be included. A relatively straightforward concept, but it basically states that only measurable value should be considered. This is not always intuitive.

Project Management by MOV should provide value to the organization. This is the underpinning of the MOV concept and is analogous to the concept of ROI. ROI, however, is a measurement of the difference between expenditure and the expected value to the organization. MOV does not seem to take into account the cost of its own provisioning and only looks at the measurable business value after project completion, while ROI takes into account the cost of providing that value as well as having the potential to consider the non-measurable organizational value which may be the driving force of a project.
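As a tiny worked illustration of that difference, with entirely hypothetical numbers, consider:

```python
# Hypothetical numbers to contrast ROI with an MOV-style goal.
project_cost = 250_000       # expenditure to deliver the project
measured_benefit = 300_000   # measurable business value observed after completion

# ROI nets the cost of delivery against the benefit achieved.
roi = (measured_benefit - project_cost) / project_cost
print(f"ROI: {roi:.0%}")     # 20%

# An MOV-style goal, as described above, is stated and verified purely in terms
# of the measurable outcome, e.g. "increase annual sales by $300,000"; it does
# not, by itself, account for the $250,000 spent to achieve it.
mov_target = 300_000
print(f"MOV target met: {measured_benefit >= mov_target}")
```

The same project can therefore "meet its MOV" while delivering a thin or even negative return once the cost of provisioning is counted.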

A project MOV must be agreed upon. Marchewka states that all project stakeholders should agree upon the MOV of a project before the project starts. This requirement includes making business stakeholders as well as technology stakeholders, such as analysts and developers, agree to the MOV before a project begins as a later measurement of project success. This is a difficult task as it is in one group of stakeholders' interest to make the MOV high while it is in the interest of the technology stakeholders to make it low. This is especially difficult as it benefits the business side to trick or take advantage of the lack of business acumen on the technology side, and it requires the technologists to allow themselves to be judged by something that they neither understand nor ultimately control.

Verifiability of the MOV is key. Since the project's MOV is measurable by definition it must then be verifiable. After the project has been completed the MOV is to be verified to determine whether or not the project was successful. However, Marchewka does not seem to address the issue of ongoing organizational value. A typical IT project will deliver negative value up front, will increase in value over time and then, eventually, will decrease in value until it is replaced. True MOV would not be verifiable until the end of its lifespan.

For example, since code from IBM’s System/360 project from 1964 is still widely in use one would assume that IBM has not yet been able to determine the final MOV for that project. If their initial estimates had been extremely accurate and had taken into account a lifespan that might even top fifty years then the MOV would not yet be able to be verified as having quite reached its full value. Therefore a useful MOV is one that takes into account an acceptable lifespan of measurement but this introduces many more factors.
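A minimal sketch of that lifespan problem, using invented yearly figures rather than anything from IBM or any real project, might look like this:

```python
# A sketch of the lifespan problem described above, using invented yearly figures.
# Year 0 is the build year (negative value up front); later years deliver value
# that rises, plateaus and eventually declines until the system is replaced.
yearly_net_value = [-500_000, 100_000, 200_000, 250_000, 250_000, 200_000, 120_000, 50_000]

cumulative = 0
for year, value in enumerate(yearly_net_value):
    cumulative += value
    print(f"Year {year}: net {value:>9,}  cumulative {cumulative:>9,}")

# Only after the final year can the full measurable value of the project be
# verified; any earlier check is a snapshot, not the MOV of the whole lifespan.
print(f"Lifetime value, verifiable only at end of life: {cumulative:,}")
```

Checking the MOV in year two of such a curve tells you only that the project has not yet paid for itself, not whether it ultimately will.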

]]>
https://sheepguardingllama.com/2008/01/overview-of-measureable-organizational-value-mov/feed/ 6
The Fallacy of Bonuses https://sheepguardingllama.com/2007/10/the-fallacy-of-bonuses/ https://sheepguardingllama.com/2007/10/the-fallacy-of-bonuses/#comments Wed, 17 Oct 2007 19:20:05 +0000 http://www.sheepguardingllama.com/?p=2101 Continue reading "The Fallacy of Bonuses"

]]>
In business, people love to talk about bonuses. "Bonuses are an incentive for employees to work harder," you hear people say. But are they? Let's take a critical look at this "motivational" practice.

Thinking that bonuses are a path to employee efficiency shows a great lack of corporate empathy by management for the workers.  It represents a disconnect between those who organize the work and those who actually perform it.

Management often thinks that since bonuses are “bonus” money – above and beyond salary – that employees must be delighted to receive them.  But what they forget is that the non-management employees spend a lot of time thinking about their monetary compensation and they are more than aware that that bonus is simply salary that was withheld until the end of the year.  It isn’t “extra” money but salary paid in an irregular fashion – it is an integral part of their total compensation package.

What employees all know, and management seems to not understand, is that employees are never empowered enough to make decisions or to work hard enough to significantly impact a company’s bottom line in such a way as to noticeably change their own bonus.  This seems so obvious that one has to wonder how any management student could ever get away without knowing this instinctively.

Employees seldom get to work in any environment that they deem to be in the best interest of their own efficiency. How many more employees would choose to work from home or from the corner office? Employees don't determine which projects they work on or who their team members will be. Management decides the "big scope" items. Employees are tasked with more specific jobs. A factory line worker at Ford isn't allowed to exclaim that "Automotive profit margins are down so I am going to work on making video games instead of assembling fuel injectors." Those decisions are left to others. Non-management employees are not in a position to make decisions of a scale that can affect their bonuses in any meaningful way.

Even in companies that make concerted and sincere efforts to give their employees real decision making power the employees are still shackled by the scale of the company.  Even a small corporation of one thousand employees shows that the actions of any one employee can only have a minimal personal impact.

The reason that people choose to work for large companies is because of the mediating factor that size allows for.  A large company can make large profits or lose large sums of money from year to year but still will have the resources to pay its employees the same from year to year.  Employees are largely protected from market variations and upheavals.  They take little risk and they get compensated very conservatively.   This is the value that large corporations bring to the table when shopping for employees.  Bonuses, when used by large companies, show that companies have forgotten this value and think that their employees will be happier taking risks.

Many people are happy to take risks.  These people are generally entrepreneurs.  Small companies and start-ups have very small staffs where individuals are able to impact the profitability of the company and the individuals can know each other and decide together to succeed or fail.  They are a single team that makes choices together.  They take huge risks but because they do they can also reap great rewards.  This is a calculated risk that makes sense.

When a company pays a good bonus the employees will generally be happy. Some will think about the fact that this is salary that they earned throughout the year, withheld so that the company could use it to make more money during the year, but many will not. When a company does not pay a good bonus, though, employees immediately think about how they were not in a position to impact the company's profits, and they blame management for having failed to do its job and then taking the losses out of the powerless employees' wages. And this is exactly what has happened.

Bonuses are a means by which managers and companies hedge against losses.  Instead of paying employees what they are worth, they tie the employee’s wages to the company’s performance so that on good years employees get paid market value (one hopes) and on bad years the employees suffer so that the company doesn’t have to spend as much on the labor that it has already consumed.

Any employee who has ever been paid a small bonus is acutely aware of this situation.  I have personally never worked for a company that has paid bonuses and yet just hearing about a company which works in this way makes me shudder and think of middle managers trying to lessen the damage caused by their own incompetence.

Bonuses have no real upside to employees and very little to businesses themselves.  The initial benefit to a company is obvious – reduced operational expenses during times when the company performs poorly.  But in the long term this causes the company to compensate poorly and lose staff capable of moving around easily in the marketplace.  Those who stay are disgruntled and, most likely, under-qualified.

Never tie one person’s motivation to another person’s performance.

]]>
https://sheepguardingllama.com/2007/10/the-fallacy-of-bonuses/feed/ 1
The Social Web and the “Reconnection Generation” https://sheepguardingllama.com/2007/09/the-social-web-and-the-reconnection-generation/ https://sheepguardingllama.com/2007/09/the-social-web-and-the-reconnection-generation/#comments Tue, 18 Sep 2007 14:38:41 +0000 http://www.sheepguardingllama.com/?p=2059 Continue reading "The Social Web and the “Reconnection Generation”"

]]>
Each generation faces unique challenges and a varying landscape of social interaction. In the last several hundred years we have seen these changes happen at a pace hitherto unknown in human history.

During the nineteenth century reliable high speed steam transportation gave people, for the first time, the ability to send mail and parcels long distances in a relatively short period of time. This gave the children of the era a future with the potential to stay connected to long distance friends and relatives as they had never done before. The modern communications revolution had begun.

When the telegraph was invented it again revolutionized communications giving family and friends the ability to communicate news in hours rather than days and over ever further distances. The telephone made a leap most profound by not only speeding communications again but by also bringing the means for communication into peoples’ very homes. The telephone also gave us the ability to hear another person’s voice giving the media a sense of tactility making intensely personal a world that had previously known only mechanically reproduced messages.

In the personal computer era (the 1970s through the mid to late 1990s) and in the Internet era (since the mid to late 1990s) we have again made leaps in the ease, cost and availability of communications. One of the most important changes that has occurred during this era is the move from temporary communications channels to persistent communications channels.

What is a persistent communications channel? Portable telephone numbers, email, instant messaging, home pages, blogs, etc. What makes these persistent channels is that once someone learns the means by which someone else may be contacted that person can be contacted by that means forever. SGL is my blog and my personal homepage (for those who remember that term.) Anyone wishing to find me again need only remember the URL for this blog and they can check up on my personal news or leave a message for me. No other information would ever be necessary to relocate me should someone lose touch with me at some point in the future. And to make things even easier than remembering the URL – you can always search for it.

In fact, even remembering how to reach me is completely unnecessary. A simple search on Google or Yahoo! for “scott alan miller” will provide you more hits related to me than you will know what to do with. This doesn’t work for everyone but for many people it does.

More than just websites, though, are now persistent. Email addresses, for example, last practically forever. I first opened a Yahoo! mail account around 1997 and still use it today. I will have that email address for the rest of my life. My SGL mail will be with me forever. Either of those addresses will always reach me whether it is today, in five years or in fifty years. Anyone who keeps my email address can track me down in the future. Not like the postal addresses of the past that were accurate only so long as you never moved.

This persistence becomes ever more important as its mere existence fuels a more and more rapidly relocating populace. People move all of the time now. Fifty years ago you were relatively likely to grow up and live in or near the town that you were born in. Today to do so would be almost surprising.

Until just a few years ago, as recently as 1999, the move that most students made between high school and university was so dramatic that it forced a severing of most ties between school life and college life. Any student moving out of their parents’ home would need a new phone number and would have a new mailing address. Only those people who actively worked to maintain ties would be able to maintain them. Once contact was lost it was difficult if not impossible to reestablish.

Today the idea that changing schools, cities, states or even countries would negate two people’s ability to maintain regular contact is completely foreign to the “digital natives” currently residing in our education  and social systems. In fact, social interactions have moved so completely online that even I, as a “digital immigrant”, can barely tell the difference between people that I see in person on a regular basis and those that I only communicate with online. The transparency of online communications to digital natives is so significant that the impact of physical social separation must be nearing nominal in many cases. This trend will continue.

Even today we begin to see this trend turning into a solidified reality. People are maintaining communications and ties with each other even when one or the other or both are physically relocating, traveling, switching jobs or schools, and so on. What makes this phenomenon truly exciting is how often these transitions occur with so little interruption to the communications process that the other party is often unaware that a potentially disruptive event is taking place.

Today we see a new trend in communications emerging: the social web. The social web is giving Internet denizens the ability to interact in new ways with people that they already know. More importantly, however, it is providing a framework for maintaining connections with people from whom they would otherwise naturally drift apart. The real innovation in the social web is its amazing ability to expand our permanent connection base.

As an example, in the era when email was the most robust and prolific social interaction tool available online, most people would only maintain email addresses for a select number of personal contacts. Maybe twenty or thirty people on average. Maybe many fewer. These addresses would be kept up to date and emails would be occasionally exchanged. This is roughly analogous to the Rolodex of the 1980s. Only so many contacts could be maintained and generally only for people who would be contacted with enough regularity to be able to maintain updated and useful information. Once contact was lost the contact information would be discarded and total number of contacts would be reduced.

In the era of the social web connections are persistent. By adding someone to your list of friends in many modern social networking applications you have a connection to their email, blog, news, pictures and, most importantly, to their own address book. By finding one person online that you know from school, work, the community or simply through shared interest you grow your social network. Social networks deepen the level to which communications become valuable. Instead of maintaining the ability to easily contact a small number of select friends and family, suddenly you are able to contact scores or hundreds of people that you know but may not know well enough or care to contact often enough to maintain in your traditional personal address book.

For digital natives the concepts of the social web are obvious. Of course, digital natives maintain persistent relationships throughout their lives. Severing ties because of “inconvenience” doesn’t exist to them conceptually. Friends are friends forever. Acquaintances are worth keeping tabs on. You never know when you will discover a friend from high school at a branch office of your company or that an acquaintance from the gym is in your online university class.

In the past it was accepted that physical relocations meant severing all but the strongest of ties and often those too would be broken simply through neglect, accident or circumstance. I have a friend who has managed to lose her telephone numbers, email and mailing addresses – all at the same time – on more than one occasion. Because she does not use the social web or even a blog where she can post her new contact information she is continuously cutting all ties to her former life and starting over with the few people that she manages to contact again. This used to be standard. But no more.

I like to refer to the current generation and future generations of digital natives as the "connected generations." The society that they build will be inherently connected to one another. They will see themselves as a single culture rather than a loose collection of individuals held together by location and circumstance. They have strong relationship bonds and many of them. They maintain a social memory far greater than we who came before them can imagine.

Similarly I look at the previous generations as the “disconnected generations.” Their ties to one another are tenuous at best. Their social circles are small. Their relationship bonds are weak. They are islands in the stream of life – isolated from each other.

My generation falls between these two. The disconnected generations are, by and large, uninterested in becoming connected. The ties that have been severed were severed long ago – too long for the bonds to be reforged. Their view of the world does not include the need for a large social cloud persisting around them. They don’t actively refuse social connections but, instead, do not even realize the possibility or the potential for such an extent of interaction. It is completely foreign to them.

Unlike the previous generations, my generation managed to grab onto the concepts of the persistent social network just as our ties were beginning to fail. We were brought up in a world where tie severing was as common as it ever was. But a few technologies and trends began to appear that slowed this process as we became older. Cell phones began to be prevalent just in time to provide many people with overlapping phone numbers. As people began to track multiple phone numbers they found that they could keep connected more easily through traditional tie severing transitions. Long distance phone calls became nominally expensive or free. Email appeared. And the persistent connection revolution was started.

Because we grew up cutting ties throughout our childhoods, but started to resist as young adults, my generation has the unique position of understanding and desiring the persistent social structure that we see forming in the generation that follows us. However, we have already lost contact with a large number of our former connections. Hence we have become the "reconnection generation." No other large social group is likely to ever again attempt to mount a large scale siege on our own social structure in this way. We are attempting, in a very short time, to rebuild the ties that were lost, in many cases, many years ago.

The social web has become the tool allowing for this social revolution. We are reconnecting, rediscovering and reinventing ourselves. Believing our pasts to be lost we are now finding that the ghosts are not so often ghosts as we had believed. Finding one childhood friend could lead to the discovery of several more. An old classmate becomes a hub for new connections. Websites like Classmates, MySpace and Facebook help to create reconnection opportunities that we would never have imagined before.

This generation may be, of all generations, the one that understands the importance of the social web – both in the context of the set of Internet protocols as well as the mesh of social connections – and appreciates the value that it brings to society and humanity. We know the pain of separation and we know the joys of friendships reborn. We have had time apart and now we can reminisce and reunite. We have been alone but now we are together. Solidarity is ours!

]]>
https://sheepguardingllama.com/2007/09/the-social-web-and-the-reconnection-generation/feed/ 1
What is a Game? https://sheepguardingllama.com/2007/07/what-is-a-game/ https://sheepguardingllama.com/2007/07/what-is-a-game/#comments Tue, 10 Jul 2007 18:47:00 +0000 http://www.sheepguardingllama.com/?p=1967 Continue reading "What is a Game?"

]]>
Everyone has their own definition of what constitutes a "game" and, as one would suspect, philosophers have put forth their own theories as to what is and is not a game. I have long felt that one aspect of gaming is often misidentified, and that many activities commonly called games are, in my view, not actually games at all.

I believe that, in addition to whatever other definition one uses for a game, the needed additional rule is this: to be a game an activity must have an outcome that can be directly affected by the player. That is to say, the player doesn't just choose to "play" but, once play has begun, can actually choose to change the final outcome. Put differently, in order to be a true game an activity must require that logic and reasoning be applied.

To further refine this definition it should also be included that the game should have, within its affectable components, enough complexity to disallow any "perfect" play – at least by humans.

A fuzzy definition, I realize, but a useful one all the same. Chess, for example, is clearly a true game as the moves made by the players directly affect the outcome – the fundamentals of skill. It is further a true game because no human has memorized the “perfect” chess game or set of moves that can be duplicated blindly to guarantee a win or a “best outcome”.

Given that we know that Chess, Go, Age of Empires and similar games are, in fact, games let’s look at some examples of what fails to be a game – at least to me.

The Automatic Win/Lose: This artificial non-game is very simple. One or more players choose to play a non-game. Each player is then informed that they have either won or lost. This fails to be a true game because the players don't really play – they simply learn of their results. This may sound like a silly example until you realize that one of the most popular game-like activities is lotto and lotto-like gambling (slot machines, for example) where you pay to "play" and you are simply informed that you have either won or lost. There is some enticement in the fact that there is money to be won or lost but that is outside the bounds of defining this activity as a game. Many lotto dealers promote lottos that are automatic win/lose scenarios as games even though the player has no ability to affect the outcome.
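To make the shape of this non-game concrete, here is a minimal sketch in Python; the 50/50 odds and the exact messages are purely illustrative assumptions, not a description of how any particular lotto or slot machine pays out.

```python
import random

def automatic_win_lose(rng=None):
    """The entire "game": the player makes no decision and is simply told a result.

    The 50/50 odds are an arbitrary illustrative assumption.
    """
    rng = rng or random.Random()
    return "You win!" if rng.random() < 0.5 else "You lose."

# "Playing" consists of nothing but asking for the verdict.
print(automatic_win_lose())
```

Notice that there is no parameter the player could vary to improve their chances; the verdict could just as well have been computed before the player agreed to play.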

Many people may actually enjoy activities that require no thought or effort and simply result in a win or a loss. It is not uncommon to find people who actually gravitate towards this type of activity but to call that activity a game would render the term game meaningless.

“Hello little children, would you like to play a game?”

“Yes, please!”

“Okay, you lose. Wasn’t that fun? Would you like to play again?”

As ridiculous as this seems the Internet and anonymous gaming has begun to show that players will happily cheat at a game whose only goals are to win or lose within the game itself. By cheating they are not actually playing the game but are simply attempting to get the system to provide them with a "You win" outcome. Prior to anonymous gaming opportunities this was generally assumed to be caused by a need to show superiority to others but now we have a very good opportunity to witness that the only real desired outcome is not actually winning at the game but being told that you have won. It is clear that "playing a game" is not a universally desired activity. The automatic win/lose is more desired than it would seem.

The automatic win/lose scenario can be applied to more complex systems that give the appearance of a game on the surface. A perfect example of this is the children's board game snakes and ladders, which was popularized in Victorian England. (It is commonly known under the brand name Chutes and Ladders in the United States.) This game has many of the trappings of a true game, giving as much of the appearance as possible that the activity is actually a game. But it is not. The player has no effect on the outcome of the game (without resorting to cheating, refusing to play, stopping play, etc. – all of which break "the game" and are often given as excuses for why it would still be a game).

Snakes and Ladders (or Candyland or any other of a myriad of similar activities) serves as nothing more than a fancy covering over the automatic win/lose scenario by giving the impression of forward progress, introducing a built-in random element, having an official "name" and a board on which to play. It even has a set of rules. But in the end the players never make a single decision. The game simply starts, the rules are followed, no decision is ever made and a winner is announced. The game could be reduced to several players simply rolling a die and the highest (or lowest or closest to "3") number wins. Activities such as Snakes and Ladders are simply game-like illusions designed to provide positive feedback to players incapable of winning a game where skill is involved and to teach game-like rule following and constructs to children too young to participate in real games.
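To illustrate that reduction, here is a rough Python sketch of a Snakes and Ladders style race; the board size, the jump squares and the lack of an "exact landing" rule are simplifying assumptions for illustration. The important point is that the loop contains no branch where a player makes a choice – the dice alone determine the winner.

```python
import random

def snakes_and_ladders(num_players=2, board_size=100, seed=None):
    """Play a simplified Snakes and Ladders round and return the winning player."""
    rng = random.Random(seed)
    jumps = {4: 14, 17: 7, 28: 45, 62: 19, 80: 99, 87: 36}  # landing square -> destination
    positions = [0] * num_players
    while True:
        for player in range(num_players):
            roll = rng.randint(1, 6)                 # the only "input" to the game
            square = min(positions[player] + roll, board_size)
            positions[player] = jumps.get(square, square)
            if positions[player] == board_size:
                return player                        # a winner is announced; no one decided anything

print("Winner: player", snakes_and_ladders(seed=7))
```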

The next category of activities crosses into a foggier territory – that of activities that have a clear set of "best practices" that guarantee or nearly guarantee the best possible outcome. The best example of this would be in the game of Blackjack or 21. The rules of the game are simple and the player can directly affect the outcome. However, there is no true allowance for creativity or strategic thinking in Blackjack. There is a well known and well defined basic strategy that provides a clear "best outcome" over any large number of games. Any player who does not follow this strategy will eventually lose to a player who follows this strategy and the strategy is simple and can be learned by almost anyone in just a few minutes. Once this has been learnt even the most novice player on their first game is equal to the most seasoned player and the game is reduced to an automatic win/lose. On any given hand of Blackjack a random play diverging from the accepted standard best practice might yield a better outcome but this is an anomaly and over the long course of play will not continue to be a winning practice. It is a more complex illusion of being able to positively affect the activity's outcome.
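To show how little room is left for in-game decision making, here is a toy rendering of the hard-total portion of that basic strategy as a simple lookup function. This is a deliberately incomplete sketch for illustration only – real basic-strategy charts also cover soft totals, pairs, doubling and surrender – but it captures the idea that the "right" play is already known before the hand is even dealt.

```python
HIT, STAND = "hit", "stand"

def basic_strategy(player_hard_total, dealer_up_card):
    """Return the textbook play for a hard total against a dealer up card (hit/stand only)."""
    if player_hard_total >= 17:
        return STAND
    if player_hard_total <= 11:
        return HIT
    # Hard 12-16: stand against a weak dealer card (2-6), otherwise hit,
    # with the usual exception that hard 12 hits against a 2 or 3.
    if dealer_up_card in (2, 3, 4, 5, 6):
        return HIT if player_hard_total == 12 and dealer_up_card in (2, 3) else STAND
    return HIT

print(basic_strategy(16, 10))  # -> "hit"
print(basic_strategy(13, 5))   # -> "stand"
```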

A slightly more complex example of this same phenomenon is the popular board game Monopoly. In Monopoly the player has the ability to make many choices throughout the course of the game but, once again, there is a basic strategy that, once applied, gives the best possible chance of winning. Beyond holding to this simple strategy the game, once again, reduces to nothing more than an automatic win/lose scenario. Any divergence from the accepted strategy of Monopoly is, basically, voluntarily losing or lowering the chance of winning. Intentional losing is not a part of accepted gameplay in normal gaming situations – it is the same as not playing. A player intentionally throwing a game of chess is doing so to create the illusion of a game while actually providing an automatic win/lose.

Most activities that people traditionally identify with games, in my opinion, outside of the very traditional games such as chess, draughts, go, etc. generally boil down to something as simple as a die roll determining win/lose outcomes while providing players with the impression that they have worked hard, thought carefully and managed to outperform their opponents. Most people are not good at games and this randomized, skill-less winning provides a sense of accomplishment when no work, skill or thought has been applied.

Perhaps the widespread popularity of automatic win/lose or known best approach games of chance is one of the best examples of modern society working very hard, even subconsciously, to reward mediocrity.  We want everyone to feel that they can win a game even if it is just an illusion.

You may wonder why I am so adamant about what constitutes a game.  The answer is simple.  I never want to spend several hours of my life "playing a game" that is nothing more than a random chance of winning or losing.  Where is the fun in that?  There is no challenge (not that there is little challenge – literally there is none at all), there is no skill, there is no "trying hard" or careful strategy.  I have no concept of how an activity which involves no input whatsoever from its participants can ever compete with actually doing nothing or, better yet, taking part in an interactive activity such as a game.  Activities such as this are one of the ultimate wastes of time known to man – few activities can utilize so much time while engaging us so little.  It would be better to take a nap because at least then you have rested.  Additionally, activities such as this are commonly designed to produce one winner and multiple losers.  Not only does one person seem foolish for feeling that they have accomplished something by being happy to be told that they won, but several people have to feel as though they have failed even though the activity is nothing more than random chance.  Overall, it is designed to produce bad feelings while accomplishing nothing.

]]>
https://sheepguardingllama.com/2007/07/what-is-a-game/feed/ 1
Video Games as Literature https://sheepguardingllama.com/2007/07/video-games-as-literature/ https://sheepguardingllama.com/2007/07/video-games-as-literature/#respond Mon, 09 Jul 2007 22:00:00 +0000 http://www.sheepguardingllama.com/?p=1964 Continue reading "Video Games as Literature"

]]>
At the turn of the century (that is, in 1899 – the LAST century) I think that few people would have guessed that photography would become a significant form of art, that recorded music would mostly displace performance music as the core of the musical arts, or that future forms of media such as television and cinema would become part of the core literary and dramatic traditions, rivaling books and performance theatre in their importance to the collective consciousness of society. Children today can no more survive without being able to reference Star Wars IV: A New Hope, The Sound of Music or Schindler's List than "1984", "Watership Down" or "To Kill a Mockingbird". Movies and television have entered the literary mainstream and are here to stay.

After a century of passive media and entertainment affecting our views and opinions of what constitutes literature, today we are beginning to see the emergence of another literary form – the video game. Unlike books, movies, theatre or television, video games are not passive but are active literature – literature in which the reader affects the story whether simply by moving it along or by changing the potential outcome of the story. I am not suggesting that society is ready to accept video games as significant media but this change will come and it will come quickly.

Different genres of video games will, of course, be viewed differently. Adventure and Console-style Role Playing Games already often cross the line from a pure "game" into the solid realm of interactive literature. Storylines have been growing and becoming more and more involved since the late 1970s, beginning seriously with text adventures such as Infocom's Zork and later with graphical adventure games such as Sierra's King's Quest. Games began to tell stories and included engaging characters and situations. Video games rapidly began to take on the role of fiction with the addition of "reader" interaction. No longer is the reader simply an observer but a participant in the telling of the story.

Adventure games, whether text, graphical or illustrated with games such as Arthur: The Quest for Excalibur, clearly led the way in creating a position of literature within the video game arena while the bulk of video gaming leaned towards the trivial action-oriented games popular on consoles such as the Atari 2600 or the Nintendo Entertainment System. Much of this would change, however, with the release of Dragon Quest (Dragon Warrior in the United States) and the launching of the "Console RPG" or "Japanese RPG" genre.

Unlike traditional Role Playing Games, or RPGs, which have translated poorly to the video game format, a console RPG is a more linear style of game that is closer to an Adventure game but incorporates some traditional RPG elements. Console RPGs lend themselves very well to deep storytelling and complex character development within the video game framework. Console RPGs may, in fact, surpass earlier Adventure style games in their ability to develop characters and provide deep story telling.

Recently the more traditional RPG genre (that is, non-linear RPG style play rather than linear or near-linear Console RPG style) has begun to benefit from the ever increasing computational powers of the computer or game console as well as the ever expanding budgets of the gaming industry to begin to move into the "video game as literature" category. The principal example of this within the RPG genre at this time would be The Elder Scrolls IV: Oblivion.

Console RPGs began in earnest in the late 1980s and throughout the 1990s began to explore complex stories and persistent worlds that stretched from game to game. Today video games in this genre will often stretch towards and even exceed one hundred hours of game play, providing an opportunity for a depth of story competing with that found in traditional literature while not treating the reader as a passive entity. This complex and deep story telling combined with active reader involvement is giving us today, and promises to provide in the future, a new literary tradition that is dramatically more significant to its readers.

Electronic Gaming Monthly, a major US video game industry publication, said this about Final Fantasy VII – the quintessential highly linear console role playing game – "Square's game was … the first RPG to surpass, instead of copy, movie like storytelling", and that, without it, "Aeris wouldn't have died, and gamers wouldn't have learned how to cry." Final Fantasy VII was one of the first Console RPGs released on the Sony PlayStation, coming out in 1997, and it defined the state of the art in video game storytelling during the era. More recently, and also from Square Enix, the successor to Squaresoft which made Final Fantasy VII, on the PlayStation 2 is Dragon Quest VIII, which does a magnificent job of pulling the player into the game world and telling a very traditional fantasy adventure tale while doing so in an entirely new way that makes books seem almost obsolete.

Will video games of the future, perhaps in the next fifteen to twenty years, rival modern fiction literature in their ability to educate, to stir to action, to bring to tears, to move, to motivate, to thrill, to drive? Yes, I believe that they will. And I believe that video games will become an increasingly important part of our collective cultural experience and, as such, will be an important part of educational curricula just as traditional literature is today. I neither expect nor hope that video games will displace traditional media but rather see them taking their place within the pantheon of literary forms.

]]>
https://sheepguardingllama.com/2007/07/video-games-as-literature/feed/ 0
The End of Powered Telephones https://sheepguardingllama.com/2007/07/the-end-of-powered-telephones/ https://sheepguardingllama.com/2007/07/the-end-of-powered-telephones/#respond Mon, 09 Jul 2007 16:32:54 +0000 http://www.sheepguardingllama.com/?p=1963 Continue reading "The End of Powered Telephones"

]]>
I have long argued that the legacy telephone network was on its last legs.  It no longer serves any real purpose.  It is expensive both for end users and for the telephone companies.  The quality of phone calls, in my personal experience, has been inconsistent and only on par with VoIP.  I have heard many people make the argument that they are willing to pay the much higher fees for legacy, powered, copper based telephones because they provide the extra reliability of having their own power so that they continue to work even when the power goes out.  A valid argument, I guess.  This assumes that the power continues to be available for the phone, which is often, but not always, the case when there is a power outage.  And it also assumes that you are willing to pay those high fees instead of getting a nice Uninterruptible Power Supply for your VoIP, which is just a one-time cost of less than $100.  A nice UPS could last through several days of having no power – long enough that you would want to have a generator anyway.

In this day and age when a huge percentage of the population uses only cell phones or VoIP phones or some combination thereof for their normal phone usage there is little argument for needing expensive legacy phones anymore.  However, the option will soon be gone as Verizon (and presumably other last mile common carriers shortly) has begun to dismantle its last mile copper network as it rolls out its fibre-based FiOS service.  This is an obvious move as fibre has far fewer restrictions on it, delivers more services to the end users and saves money both in maintenance and in power consumption.  Fibre is far better for the overall economy and the environment as it consumes vastly less electricity to support – as long as shared infrastructure regulations keep common carriers with access to the fibre infrastructure from monopolizing the market.

The Internet is taking over.  The Public Switched Telephone Network (PSTN) is over.  For many of us, it is a distant milepost in the rearview mirror.  For some it is just an inevitable shift as the old copper system becomes too expensive to support.  But one way or another, legacy telephones are done.  The faster we move on the sooner we can deal with other challenges.

]]>
https://sheepguardingllama.com/2007/07/the-end-of-powered-telephones/feed/ 0
Proposed Change to the English Language: Godly https://sheepguardingllama.com/2007/07/proposed-change-to-the-english-language-godly/ https://sheepguardingllama.com/2007/07/proposed-change-to-the-english-language-godly/#respond Tue, 03 Jul 2007 17:05:22 +0000 http://www.sheepguardingllama.com/?p=1955 Continue reading "Proposed Change to the English Language: Godly"

]]>
It has recently occurred to me that the "correct" spelling of the word godly is not capitalized – ever.  There are a few different meanings of this word.  The first is "to be like a deity, of or pertaining to a god" and the second is "like unto God."  The only real differentiation between these two meanings is in the capitalization within the definition, and yet we have only one capitalization for the word godly itself – and the implication of that spelling is towards the lesser used meaning of the word.

I propose that this is a flaw in the usage of the English language.  Thankfully our language is one that is flexible and malleable enough to be able to take on new meanings, allow for greater expression over time and to adapt as the need arises.  Throughout time English has arisen as a primary means for global communication because of these benefits, where many "older", more rigid languages have fallen into disuse or have stagnated as overarching control of the language has kept them from adequately adapting to the greater range of expression necessary in today's highly advanced societal structure.

My proposition is that two words be formed: godly and Godly.  The uncapitalized form of the word will continue on to mean “of or like a deity, etc.” while the capitalized form can be used in any reference to the monotheistic form of the word.  This keeps literature from being unnecessarily ambiguous.  Take the following sentence as the dictionary would have us write today:

Lauren, being a godly woman, ran to work quickly.

Currently this sentence could mean that she was a good, Christian woman who fears God and follows his commandments and loves her fellow man, etc.  It could be that the author simply wants to inform us that she is a "believer" who is faithful.  It is a statement about her character and her faith.  Her travel to the office could be incidental.

However, the same sentence could mean that she is actually a minor deity or, perhaps, a superhero (an actual example used in a dictionary entry that I checked.)  So maybe the implication is that she ran several miles in under a minute to work or that she tossed cars out of her way as she went.  Or maybe her job is dressing in spandex and using her super-stretchy arms to stop would-be jewel thieves.

Instead we can solve this issue by simply having these two different versions available to us when needed:

Lauren, being a Godly woman, ran to work quickly.  (Religious meaning.)

Lauren, being a godly woman, ran to work quickly.  (Referring to superhuman x-ray vision.)

The point is, the word is pointlessly ambiguous.  But now I have pointed out the flaw, stated the need and shown how to use the word correctly.  Now we can move on.  Encourage your friends to use the new word.  Use it yourself.  With a simple, grassroots movement we can quickly correct this oversight in the language.

]]>
https://sheepguardingllama.com/2007/07/proposed-change-to-the-english-language-godly/feed/ 0
On Sentience https://sheepguardingllama.com/2007/05/on-sentience/ https://sheepguardingllama.com/2007/05/on-sentience/#respond Fri, 18 May 2007 22:31:34 +0000 http://www.sheepguardingllama.com/?p=1905 Continue reading "On Sentience"

]]>
“The question is not, ‘Can they reason?’ nor, ‘Can they talk?’ but, ‘Can they suffer?'” – Jeremy Bentham

Any discussion of the ethics of Artificial Intelligence is muddied by a social misunderstanding of the conceptual underpinnings of intelligence itself. Wikipedia defines Intelligence as "a property of mind that encompasses many related mental abilities, such as the capacities to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn. Although intelligence is sometimes viewed quite broadly, psychologists typically regard the trait as distinct from creativity, personality, character, knowledge, or wisdom." What is clearly lacking from intelligence are concepts such as self-awareness, feeling and desire. Intelligence itself refers to any of a myriad of computational processes. Common examples include playing chess, pattern matching, path determination, process optimization, etc.

Because creating decision systems on computers is such a dedicated field of study, a term has been applied to the entire domain that we know as "Artificial Intelligence" or AI. AI is not a study with the intent of making computers feel, become self aware, have desires or become human like in some other manner. AI is not related to robotics any more than metallurgy is related to robots. A child's chess program utilizes AI to play chess, for example. A robot may or may not. Most robots today have no AI at all but use simple programs controlling movements.

In reality there have never been any serious ethical questions involving AI. AI is extremely straightforward to anyone with a basic understanding of computers, computational mechanics and/or programming. AI simply refers to complex decision making algorithms that could be implemented through mechanical means or even on paper. The results of AI will be identical every time given the same set of inputs (there is a field of AI research that uses randomized input, which is quite interesting, but randomization is not sentience.)
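As a hedged illustration of that point (the candidate moves and scores below are entirely invented), this is how unremarkable such a decision procedure really is; given the same inputs it returns the same move every single time, and it could be worked by hand on paper.

```python
def choose_move(board_scores):
    """Pick the candidate move with the highest heuristic score.

    `board_scores` maps a candidate move name to a score from some evaluation
    function. Ties are broken alphabetically so the result is fully reproducible:
    identical inputs always yield the identical move.
    """
    return max(sorted(board_scores), key=lambda move: board_scores[move])

# Entirely invented example inputs, for illustration only.
candidates = {"advance-pawn": 3, "develop-knight": 5, "castle": 5}
print(choose_move(candidates))  # always prints "castle" for these inputs
```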

The ethical questions arise when people begin to consider the issues involved in bestowing sentience upon a machine. It is artificial sentience (aka artificial consciousness) that science fiction often confuses with the real world study of AI. Wikipedia defines sentience as “possession of sensory organs, the ability to feel or perceive, not necessarily including the faculty of self-awareness. The possession of sapience is not a necessity.”

Once sentience comes into play we must begin to consider the question of when a mechanical creation moves from being a machine into being a life form. This will prove to be an extremely difficult challenge as sentience is nearly impossible to define and much more difficult to test for. Scientists and philosophers have long considered the puzzle of determining sentience but no clear answer has been found. I propose, however, that sentience will not and cannot happen through happenstance as many people outside of the computer science community often believe. Sentience is not a byproduct of "thinking quickly" but is a separate thing altogether. For example, a computer in the future could easily possess "intelligence" far exceeding that of a human but the computer, regardless of the speed of its processors, the size of its memory or the efficiency and scope of its AI algorithms, will not suddenly change into a different type of machine and become sentient.

Sentience does not refer, as we have seen, to “fast” or “advanced” AI but is a discrete concept that we have not yet fully defined. More importantly we have not yet discovered conceptually a means by which to recreate such a state programmatically. Perhaps computational logic as we know it today is unable to contain sentience and a more fundamental breakthrough will need to be made. I feel that it is extremely unlikely that such a breakthrough can be made before a more complete understanding of current, biological sentience can be comprehended and explained.

Should sentience become an achievable goal in the future, which it very well may, we will suddenly face a new range of ethical questions that are only beginning to be touched upon from areas such as genetic research, abortion rights and cloning. For the first time as a species humans would face the concept of "creating life", which is unmatched in its complexity among all other modern ethical questions about life.

The first ethical challenge would be in setting a sentience threshold. Put simply, this can be defined as "when does life begin?" At what point in the making of a sentient being does it become sentient? Obviously in most creation processes involving sentience a human-made machine will be "off" and non-sentient during its creation process and sentient once it is "turned on". More practically we need to set a threshold that decides when sentience has been achieved. This measurement would obviously have ramifications outside the world of artificial sentience research as a sentience measurement would begin to categorize existing life as well. Few would dispute that amoebas are non-sentient and that dogs are sentient but where between the two does the line fall?

Already we are seeing countries, especially in Europe, beginning to create laws governing sentient beings. In recent years penalties for torturous crimes against highly intelligent animals have been increased significantly in an attempt to recognize pain and suffering as being unethical when inflicted unnecessarily upon sentient beings regardless of their humanity. Better definitions of sentience and the rights of sentient beings need to be defined. Humans often see species membership as a deciding factor for rights but this line may become far more blurry if artificial sentience succeeds as well as many hope that it will.

Many, if not most, of those interested in the advancement of artificial sentience are truly interested in the further prospect of artificial sapience. If artificial sapience is truly achieved and a clear set of rights is not in place for all sentient beings we risk horrifying levels of discrimination that could easily include disagreements over rights to life, liberty and property. But unlike non-human biological sentient beings currently existing we may be faced with a "species" of artificially sentient or sapient beings capable of comprehending discrimination and possibly capable of organizing and insisting upon those rights – most likely violently, as the only example of sapient behaviour that they will have to mimic will be one that potentially did not honour their own "rights of person."

Artificially sentient beings, capable of feeling pain, loss and comprehending their own persistence, must be treated as we would treat each other. Humanity cannot, in good conscience, treat those that are different from us with a significantly reduced set of basic rights. We have, throughout our history, seen animals as humanity's "playthings" there for our amusement, food and research but this practice alone could create a rift between us and an artificially sentient or sapient culture. A hierarchy of values by species – humans have more rights than dogs, dogs more than mice, mice more than snakes – will clearly not be seen favourably by a "species" that is capable of comprehending these shifts in value in the same way that we do. We have never faced the harsh reality of direct observation and interpretation of our actions and because of this much of our behaviour is likely to be questionable to an outside observer – especially one who can see our behaviour as forming the basis for discrimination against any being that can be labeled as "different."

Human history has shown us to be poorly prepared to rapidly accept those that we see as "different." This is not unique to any single group of people. Almost all groups of humans have, at one time or another, treated another group of people as "non-human" or without fundamental rights – often by classifying the offended people as alien or "non-human." Slavery, discrimination and genocide are horrible blights on the record of our species. Our collective maturity may need a long growth period before it can handle, on a societal level, another species – artificial or otherwise – with human-like intelligence in a reasonable manner.

Humans have a long road ahead of them before they are ready to face a world with artificially sentient beings of their own creation. We also face the same problem should we ever discover, or be discovered by, another race of sapient creatures. If these sapient creatures were significantly different from us would we be prepared to treat them equitably and would they see us as being capable of doing so considering the treatment of sentient beings that we share a planet with now?

If we do manage to create a sentient being we take on a new role for which, I believe, we are poorly prepared. By creating a new “life form” we suddenly take on the role of creator – at least in the immediate sense. We must assume that there is a certain responsibility in creating a new sentient being and perhaps this will include bestowing upon it a purpose.

Of course the question arises "Is it ethical to 'create' a new sentience?" While dangerous in its implications I believe that the answer is a resounding yes. If such a feat can be achieved what possible arguments can exist against the creation of "life"? Is it unethical for humans to have been created? Do we feel that we would have been better off never having existed? Of course we don't. Without the creation of life as we know it we could not even contemplate ethics. The idea that life itself is unethical is absurd to any living creature bestowed, as we are, with an inherent value in self-preservation.

What sentience would despair of its own creation? While possible it is unlikely. A true sentience – one that can experience happiness and sadness – will surely pursue happiness. Situations may instigate despair but existence will not.

Perhaps the more potent question is: "If we possess the ability to create new life – do we really have the right to deny its creation?" Did God question whether or not he "should" create the world or did he create it because he "could?" While this question may go unanswered I believe that it gives us a glimpse into the situation in a unique way. If we have been endowed with the ability not just to live but to create, do we not then have an obligation to our own creator to expand upon our inherent drive to reproduce by taking the next step and actively producing? Is this not the next logical step in our role as the sapient member of our sentient society?

However we must consider a responsibility previously unacknowledged. Being a coexistent member of a society with multiple sapient members is one scenario and the ethics are relatively clear. We have tackled the issues throughout history and while we often did not follow our own codes of ethics we did know the difference between ethical and unethical behaviour – we simply struggled to behave as we knew that we should. But in a society with distinct sapient members in which one is the creator of the other I believe that there is additional responsibility within the creators.

It is easy to think of a child species or sentient being as being “as a child” unto mankind but our “artificial children” will be more than that. This is a new species that will look to us not just for guidance but for answers. It will be our role to bestow a purpose upon this new sentience. It will be for us to guide and nurture and to provide. The role is not a simple one nor is it an easy one but the rewards may be greater than mankind has ever experienced before.

By creating sentience and, in time we hope, sapience are we not providing for ourselves and for our "artificial children" an opportunity to go beyond the scope of our limited existence? In the creation of sentience we may realize a purpose hitherto unfulfilled in the annals of humanity – the need to not just grow as an individual but to grow as a citizen of the universe. And while our "artificial children" will not be – or, it seems, are unlikely to be – related to us in any biological way, there is a potential that they may share our hopes and dreams, our ideals and our goals and be able to carry us to new worlds and into the future. Perhaps artificial sentience is the ultimate legacy of mankind.

Can mankind be the creator of its own succession?

]]>
https://sheepguardingllama.com/2007/05/on-sentience/feed/ 0
Ethics in Artificial Intelligence Research https://sheepguardingllama.com/2007/05/ethics-in-artificial-intelligence-research/ https://sheepguardingllama.com/2007/05/ethics-in-artificial-intelligence-research/#respond Fri, 18 May 2007 16:42:48 +0000 http://www.sheepguardingllama.com/?p=1904 Continue reading "Ethics in Artificial Intelligence Research"

]]>
As computing power continues to increase at a rapid pace mankind is beginning to ponder the ethics questions involved in the possible creation of artificial intelligence. If we succeed at creating artificial intelligence or AI then we have effectively created life to some degree and we must suddenly face many challenges that we have never had to deal with previously.

To many the issues are obvious – if we create a sentient being then we take on the role of god and are the creators to whom it looks not only as parents but as spiritual creators. What rights does an intelligent artificial being have? At what threshold does a machine become sentient? Do we even have the right to create a new life-form? Do we have the right not to if we have the ability? [1] The questions are many and the answers are few. This creates potential hazards in the AI research domain today.

I believe that many of these questions may be too broad to pose at this time. Conceptually artificial intelligence means creating a machine that is "intelligent" in the same way that humans and animal life is intelligent. People often use the term "self aware." A being that thinks, senses, contemplates, desires, realizes and persists, but not indefinitely. This is surely a lofty goal – a goal so lofty and complex that it is beyond our current scope to even discuss rationally in a scientific setting. Before being able to seriously debate ethical issues surrounding something of this magnitude, which would surely be mankind's crowning achievement, we must first be able to define artificial intelligence and, indeed, life.

Society in general has learned about AI concepts from Hollywood – an industry not known for its deep understanding of technology or science. In an industry that cannot faithfully represent common, everyday technology concepts such as email, logins, web pages, etc., it is somewhat unreasonable to assume that they would be able to comprehend advanced computer science concepts. And yet society, which in general knows that Hollywood cannot faithfully reproduce the email experience, still believes that AI is defined by movies like I, Robot and AI. People generally believe that AI is magic, meaning that machines become self aware, that machines learn on their own and that machines can do something that computer scientists cannot even explain let alone intentionally build into them (what Schank refers to quite accurately as the "Gee Whiz" factor.)

But in the realm of modern academia and research the term artificial intelligence is none of the things that people generally associate with AI conceptually. [2] AI is not, as it is currently defined, even approaching creating life or even non-living self-awareness, but is instead a scientific approach to "human like" problem solving in extremely limited domains. For example, modern video games use very simple algorithms for non-player characters to do "path finding". These algorithms simply compare multiple routes from one source to one destination and determine which one is "best" or sometimes simply determine one possible route without any optimization. These systems could easily be represented on paper and few people would confuse paper with a self-aware mechanical being.
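As a rough sketch of what such path finding amounts to in practice, here is a tiny breadth-first search over a made-up grid; the map and the choice of plain BFS are illustrative assumptions (commercial games typically use A* over much larger maps), and the routine assumes a path to the goal actually exists.

```python
from collections import deque

GRID = [
    "S.#.",
    ".##.",
    "....",
    ".#.G",
]  # 'S' start, 'G' goal, '#' wall, '.' open floor

def find_path(grid):
    """Return one shortest route from 'S' to 'G' as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "G")
    frontier = deque([start])
    came_from = {start: None}           # breadcrumb trail for rebuilding the route
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    path, cell = [], goal               # walk backwards from the goal to the start
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return list(reversed(path))

print(find_path(GRID))
```

Nothing here is mysterious: it is bookkeeping over a queue and a dictionary, exactly the sort of mechanical procedure the term AI covers in practice.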

The reality is that today AI is thriving but is not what people commonly believe it to be. AI is a serious scientific and mathematical pursuit involving knowledge systems, complex logic solving, video games, etc. but it is not an attempt, nor within the foreseeable future will it even be an attempt, to model true life-like intelligence. [3] The term is very misleading and this is having a catastrophic effect on the public's perceptions of the field. But work is progressing steadily and research is ongoing. AI is in use in our everyday lives and no one has ever proposed any serious moral dilemma with microwaves intelligently deciding when popcorn is done or with Age of Empires figuring out how to move sprites around a "stone wall" instead of knocking directly into it.

In time, perhaps, by thorough research of underlying logic systems we may eventually come to have some understanding of what it would mean to truly create a sentient machine. But that day is a long, long way off and could easily never come. Until then, even proposing that ethics may come into play as to whether or not we should continue to build what are simply "more complex" traditional machines is utter foolishness; believing Hollywood and hype rather than common sense, research and reality will continue to be, as it has been, the domain of the uneducated and gullible.

[1] Hayes, Patrick and McCarthy, John. Some Philosophical Problems from the Standpoint of Artificial Intelligence. Stanford, 1969. Retrieved May 11, 2007 from: The University of Maine

[2] Schank, Roger C. Where's the AI? Northwestern University, 1991. Retrieved on May 11, 2007 from Google Cache at: Google Cache

[3] Humphrys, Mark. AI is possible… but AI won't happen: The future of Artificial Intelligence. Jesus College, August, 1997. Retrieved May 11, 2007 from: Jesus College

[4] Brooks, Rodney. Intelligence without representation. September, 1987. Retrieved May 11, 2007 from: UCLA

[This paper was written as an ungraded assignment for a class that I took at the Rochester Institute of Technology and is crippled by the essay constraints of the class. It is not my normal format.]

]]>
https://sheepguardingllama.com/2007/05/ethics-in-artificial-intelligence-research/feed/ 0
Illegalizing the Writing of Computer Viruses https://sheepguardingllama.com/2007/04/illegalizing-the-writing-of-computer-viruses/ https://sheepguardingllama.com/2007/04/illegalizing-the-writing-of-computer-viruses/#respond Mon, 30 Apr 2007 01:17:42 +0000 http://www.sheepguardingllama.com/?p=1881 Continue reading "Illegalizing the Writing of Computer Viruses"

]]>
The issue of the writing of computer viruses is a complex one. Most people see viruses in a purely malicious context. Viruses are almost always written with the intent of doing damage either to the systems that they infect or to other systems that they attack from infected hosts. But there are lessons to be learned from viruses as well with the most important lessons being related to understanding, anticipating and preventing viruses from spreading "in the wild." Many educators and researchers believe that the best way to learn about viruses and their kin, such as spyware and worms, is to write real world viruses under controlled laboratory situations. By writing viruses students and researchers can learn valuable techniques and gain insight and understanding into how viruses work and how they spread. Some research institutions such as the University of Calgary offer classes on this subject and expect students to create their own viruses or spyware during the course as a learning exercise just as other Computer Science disciplines expect students to create working code in order to learn more thoroughly how software works internally. Writing working software is quite different from reading about and studying others' work. [1]

Today many in the populace as well as those in government are attempting to create broad-stroke legislation to outlaw the writing of viruses and related programs. This is often caused by a deep misunderstanding of the vocabulary, a misunderstanding of the technology or just plain fear. We cannot solve the problems caused by viruses and other malware simply by making anyone who writes them a criminal. Like any broad legislation of this nature we risk criminalizing many who do not have malicious intent while doing little to dissuade those who are creating these technologies in order to commit crimes. [2]

We face many issues if we decide to follow a path of illegalizing viruses. The most obvious challenge is providing a clear definition of what constitutes "creating a virus". We must be able to clearly distinguish one piece of software from another as being a virus, which is possible in some cases and could prove to be very difficult in others. Many things previously thought to fall outside of the reach of this term, such as over-aggressive copy protection mechanisms, could be considered illegal even to write without distributing. This means that software could not be written for research or for testing. Software that behaved in a viral manner accidentally could be illegal even though it is simply otherwise legal software with badly behaving methods. Security managers and researchers could not test defensive systems without first being attacked by actual viruses. Virus types known about from research but as yet unseen in the wild could not be tested as it would be illegal to be proactive in this manner. This would work against current initiatives to prevent "zero day" attacks. We also face the challenge of separating virus writing from other forms of speech which are covered by the freedom of speech in the United States. Traditionally all software is covered under freedom of speech and can only be illegal through its use and not through its creation. [3]

I agree with advocates of criminalizing the writing of viral software that viruses and their malware kin are significant threats to people and businesses but I do not agree that preventing legitimate research and education or that limiting free speech are appropriate or effective methods of preventing malicious viral outbreaks in the real world. In fact, I believe that these steps appear to be counter-intuitive to the desire to protect ourselves from those seeking to do us harm. Disarming our allies is hardly a recipe for a good defensive posture. I also believe that increasing legal pressure on non-malicious virus writing activities may not have the desired results even in a more direct manner. In a study done by IBM's Thomas Watson Research Center it was seen that previous research indicated that litigious action taken against virus writers was largely ineffective, doing little or nothing to perceptibly alter the rate of creation and dissemination of computer viruses. It was also concluded that there was a real possibility of backlash in the United States where legal action that violates free speech can easily spark a revolutionary spirit and can be an encouragement to underground virus writing. [4]

I believe that those who use viruses maliciously should be prosecuted. But I feel that it is neither ethical nor practicable nor in the interest of the public good to make illegal the act of writing viral software for research, education, prevention or as a personal pursuit.

[1] Aycock, John, Teaching Spam and Spyware at the University of C@1g4ry retrieved April 29, 2007 from:

http://www.ceas.cc/2006/23.pdf

[2] Klang, Mathias (2003), A Critical Look at the Regulation of Computer Viruses from the Oxford Journals’ International Journal of Law and Information Technology retrieved April 29, 2007 from:

http://ijlit.oxfordjournals.org/cgi/content/abstract/11/2/162

[3] Filiol, Eric (2005), Computer Viruses: From Theory to Applications, Springer

[4] Gordon, Sarah, Virus Writers: The End of The Innocence? From IBM Thomas J. Watson Research Center retrieved April 29, 2007 from:

http://www.research.ibm.com/antivirus/SciPapers/VB2000SG.htm

]]>
https://sheepguardingllama.com/2007/04/illegalizing-the-writing-of-computer-viruses/feed/ 0
Do IT: Breaking In – Interviewing https://sheepguardingllama.com/2007/04/do-it-breaking-in-interviewing/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-interviewing/#comments Fri, 20 Apr 2007 19:15:05 +0000 http://www.sheepguardingllama.com/?p=1871 Continue reading "Do IT: Breaking In – Interviewing"

]]>
Like resumes, interviewing happens more in IT than in most other fields simply by the nature of the career itself. Most IT professionals will interview many times more than comparable professionals in other fields partially because of the nature of the staggered consultancy and headhunter organizations that do early interviews before sending applicants on to the final employers, partially because the nature of much IT work is contracting which means more rapid job changes and partially because IT professionals often achieve "promotions" through cross-corporate job changes rather than by working vertically through a single company.

Interviewing is one of those skills that gets honed only by doing it on a regular basis. The first skill that is necessary to obtain is the ability to remain calm in an interview. Most people panic in interviews, especially when they have not done very many of them. This is to be expected but panicky interviews are not very impressive. You can reduce the anxiety by arriving early, being well prepared, carrying several copies of your resume – keeping at least one for yourself to reference, carrying any phone numbers that you might need in case something goes wrong, having your route carefully mapped out ahead of time, etc. Leave yourself plenty of time for an interview as well. You don't want to be watching the clock because you need to leave. Tech interviews can easily run into several hours even for contract positions.

As interviewing is such a critical skill it is important to begin practicing early.  Interviews give you an opportunity to practice your interview skills as well as to get feedback on your resume and skill set.  I recommend interviewing early and often.  When searching for the first IT post, and possibly the first several, especially if you go the route of consulting, you are likely to interview many times before getting an offer.  This can be disheartening because interviewing is very stressful and you are likely to be quite hopeful for at least some of the positions.  But the interviewing process itself is quite important and should be treated so.  Use the opportunity to perfect interviewing skills.  Eventually you will become an expert and this will be very valuable later in your career when you need to be able to be "dead on" the first time.

The first place to start interviewing is with consulting firms.  Consulting firms are lower pressure than direct employment, often because they are far more practiced at interviewing and also because they are evaluating you for a range of positions present and future and not just for a single post, and will often provide you with good feedback such as "Come back once you have an associate degree" or "Another certification or two wouldn't hurt" or "We really need people with hands on Windows Vista skills."  Use this resource to jump start your interviewing – it is worth the stress.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-interviewing/feed/ 1
Do IT: Breaking In – The Resume Method https://sheepguardingllama.com/2007/04/do-it-breaking-in-the-resume-method/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-the-resume-method/#respond Fri, 20 Apr 2007 17:00:13 +0000 http://www.sheepguardingllama.com/?p=1870 Continue reading "Do IT: Breaking In – The Resume Method"

]]>
In any career your resume is important. In IT it is more important than most because you will spend a greater portion of your career job hunting than in a more traditional field. This is not necessarily a bad thing and should not cause panic. IT often rewards broad experience garnered from different types and sizes of organizations, promotes quickly through job changes and utilizes a high percentage of contractors and consultants, all leading to the need for constant resume submittal. Because of this the need for an always ready, polished, professional resume is important.

An important rule to remember about IT resumes is that they are not like resumes in other fields and you should not be taking resume writing advice from non-IT pros. Other fields have completely different resume requirements than IT. IT requires your resume to hold long lists of specific technologies that can be searched by keyword, IT professionals tend to work many short term jobs meaning that more jobs exist on your resume than on a traditional resume, IT professionals tend to have more certifications than any other career – possibly by orders of magnitude, IT professionals will often have as much higher education as almost any other field, etc. Hiring managers worth their weight are not looking for short one page resumes with highlights of your career but are looking for useful details that will set you apart from other candidates. So begin by ignoring your high school guidance counselor's requirement that your resume not go over one page. I have been told by many senior hiring managers that five to seven pages is perfect as long as it is filled with relevant information. Don't fill with fluff, don't add giant margins or use big fonts but don't start cutting important information in an attempt to keep your resume short either.

One of the first things that most experienced IT pros seem to agree needs to be changed is the traditional concept of the “Objective” on a resume. Just drop it. Forget about it. Objectives are for fast food workers who want to be considered someday for a shift manager position, not for deskside support contractors. No one is hiring you for your “career goal” – they are looking to see if you can fill the role that they need now. That’s it. Period. Cut the objective and never think of it again. It looks amateur, it isn’t going to help and it looks like you are trying to fill space. You can only get away with using one as long as your resume doesn’t spill onto a second page.

Your resume should be ready at all times. Start working on it early, even if it is mostly just a blank piece of paper. The first time that someone asks you for your resume you shouldn’t have to hesitate or run to whip something up. Have it ready. Have it updated. Have it available in Doc, ODF and PDF formats. Be prepared to print it out or email it at a moment’s notice. I suggest getting a web site to host it on as well, so that if you are driving somewhere and someone wants your resume right that second you can just point them to your resume’s website and they can download it in whatever format they want. Be a Boy Scout – be prepared. By having your resume always ready ahead of time you will also have plenty of time to make sure that nothing is missing, that nothing is misspelled and that the formatting is flawless. You might even want to keep a paper copy or two around for emergencies. Maybe even a copy in your car.

What should you include on your sparse entry-level resume? Your name; a professional email address – I use one with the same domain as my online resume, but you can get a good, professional one from Yahoo or GMail as well, though a custom domain has that extra something to it; a breakdown of any previous work experience; certifications; educational experience – if you have no degree but some classes, whether high school, college or other, include them briefly; volunteer experience; your home network – keep it brief and buried, but let people find it if they are interested; a contact phone number – but probably not on the publicly available online version; a list of technologies and tools with which you are familiar; and location information – the town(s) that you are based out of or available from without relocation, and possibly relocation information.

The “Resume Method”, as I like to call it, is a method of encouraging ongoing learning while developing a complete and impressive resume. This is a method that I used myself and have promoted over the years. It is something that I picked up from my days as a role-playing gamer in high school. Basically, your resume represents where you are in your career. This is true of anyone’s resume, not just in IT, but in most careers the only things that can go on your resume are jobs (and most people change jobs only every several years at most) and education (and most people get only one or two degrees at most), so their resumes are short, unchanging and mostly forgotten about between jobs, requiring a complete rewrite with every potential job change. IT professionals’ resumes are ever changing and can be added to rapidly – especially during formative career years.

The key to the “Resume Method” is to use your resume as a guide to learning new technologies or skills and to getting certifications and other forms of recognition. In IT it is easy to look at a blank resume and decide to start filling it out. Anyone entering the field should have one or two basic technologies that they have a good understanding of such as Windows XP, OpenOffice, Word, Excel, etc. Start by putting these on your resume. Then you will notice that there is a gap in your certifications, so it is probably time to get to work on a CompTIA A+ or other such introductory certification. This will take several weeks, but while working on the A+ you will have opportunities to work with a few new technologies such as, perhaps, Windows 2000 Professional, which you may then work with enough to feel confident adding to your resume. As a breather from your A+ studies maybe you want to work with Access or some other light technology that you can get comfortable with in a few days and add to your resume as well.

Filling resume gaps will be a key motivator for quite some time. Search for jobs online and discover resume line items that are highly sought after and that fall within an obtainable range for you, then target them until you reach a point where you can add them as well. Once you have two or three traditional certifications it can be well worth investing in a subscription to Brainbench and beginning to add online certifications to back up your independent studies. Brainbench offers traditionally targeted IT certifications, highly specific technology certs that do not exist from other vendors and non-IT certs that can be used to demonstrate “soft” skills such as customer service or telephone etiquette. You will eventually want to drop these from your resume, but early on they demonstrate not just skills but a dedication to learning beyond the IT technical disciplines. Caring enough to spend time and money obtaining certifications in customer service can be a differentiator compared to job candidates hoping to progress through technical skills alone.

Every additional line that is added to your resume represents an opportunity for an employer to find you in a search or to pick you out from the crowd. You don’t always know what on your resume will catch the eye of a hiring manager, so you don’t want to leave out a critical piece of information in an attempt to keep your resume short, nor do you want to bury potentially beneficial information in a sea of spin and verbiage. Keep job descriptions short and to the point. Verbosity is not rewarded in resumes.

Work on your resume on a regular basis. Make it simple but attractive, easy to follow and light on required reading. Your resume will spend a lot more time being scanned than being read. You need to optimize it for this process.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-the-resume-method/feed/ 0
Do IT: Breaking In – Books and Periodicals https://sheepguardingllama.com/2007/04/do-it-breaking-in-books-and-periodicals/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-books-and-periodicals/#comments Fri, 20 Apr 2007 15:55:54 +0000 http://www.sheepguardingllama.com/?p=1869 Continue reading "Do IT: Breaking In – Books and Periodicals"

]]>
Few fields of study expect the level of reading that Information Technology expects.  Reading is a part of everyday life in IT and the more you read the better position you will be in when interviewing.  The use of paper based books and magazines is obviously diminishing and online resources are beginning to take their place, which is somewhat changing the way that we consume information, but by and large IT reading remains roughly the same regardless of its form.

Magazines remain a strong resource and are a great source for maintaining a solid baseline in the industry.  Trade publications are often available for free to career professionals once you are a year or two into the field, but before then they are generally available online either in whole or in part.  There are many publications available, and becoming buried under a mountain of print media can be detrimental as well, so picking and choosing quality publications is important.

Almost every IT professional should read a few basics, I believe, including InfoWorld and eWeek.  These are very general publications dealing with a large cross section of the industry, touching on software development, enterprise management, hardware, software, etc.  They include trends, hot topics, pundits, etc. and have a lot of value for enterprising young hopefuls as well as seasoned industry veterans.  I have read both of these weekly rags for many years and will continue to do so.  I also subscribe to many of their RSS feeds online for more immediate news.

As many, if not most, early career IT professionals or pre-career ITPs will spend a large portion of their early career working in the Windows desktop world, I highly recommend Windows IT Pro and TechNet magazines, which are very practical and technically oriented.  Magazines like these will show you real world skills that other Windows professionals are interested in learning and help to keep you in touch with the industry in a more specific way.

Don’t get buried by magazines.  At some point you will start “magazine thrashing” and it will no longer be useful to you.  There are magazines targeted at almost every specific IT technology.  Finding magazines that are designed around what you need to know and that are useful and factual will be very beneficial to you.

Books serve a different role.  Instead of keeping you up to date on the latest trends and news, books will help you build a strong technological theory foundation.  Books come out more slowly than other forms of media and are generally designed to sit on shelves for a long time and remain mostly relevant.  This means that the focus of books will be vastly different from most other things.  The Internet seems to be rapidly taking over the “how-to” market that books used to fill, which includes detailed guides to technologies.  For example, when I first learned about Windows NT Server I did so by buying several large books on the technology and reading them.  To learn the same technology today I might buy a single book that covers the basics and the “Microsoft way” on certain things but would learn specific tasks by doing online research.  This has lowered the cost of getting into IT somewhat over the last few years.

Moreover, an investment in books will be more valuable if the books are less technology oriented and more theoretical.  In this way books are more similar to collegiate work – laying a foundation but not providing particulars – whereas magazines and the Internet remain better for day to day practical applications.  This is not always so clear cut, and many IT professionals continue to use technology related books because a well researched book from a respected publisher and author can provide a good understanding of many aspects of a technology, along with background and history that is often lacking from a practical, hands-on Internet how-to or guide.  Using a balanced variety of resources is the best approach.

As an IT professional, reading will always be a core activity whether it is books, periodicals or Internet based.  The field is demanding and a large amount of reading is important for maintaining your skills as well as for growth.  Additionally, I have found it important to always be in the process of reading a good book and to always keep up with at least two magazines.  It is not uncommon in interviews to be asked what you are currently reading, and you want to be prepared to talk about the current resources that you are using in your personal growth.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-books-and-periodicals/feed/ 1
Do IT: Breaking In – College and University https://sheepguardingllama.com/2007/04/do-it-breaking-in-college-and-university/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-college-and-university/#respond Thu, 19 Apr 2007 16:11:22 +0000 http://www.sheepguardingllama.com/?p=1865 Continue reading "Do IT: Breaking In – College and University"

]]>
Academic experience is the most common way of entering most professions. In Information Technology it likely remains so, although the ratio is probably the lowest of any technical profession. Collegiate level IT studies suffer from a number of factors that together create a unique situation for the IT industry. The key factors include high income rates for professionals, the rapid pace of technological change, easy technology access for younger students and a poor understanding of the field outside of the industry, which has resulted in high schools often guiding talented students away from the IT fields out of an unfounded belief that there are few and decreasing job opportunities, despite the continuing vacuum in the American IT workforce even after increased off-shoring and professional immigration.

Because of these issues colleges and universities have faced an unprecedented challenge in attempting to prepare the IT workforce. Information Technology, drastically more so than even Computer Science, has possibly the greatest disparity between what the collegiate system is turning out and what industry, and often students, are expecting. Students entering IT programs will range from novices looking to get their first taste of IT in the hopes of making career decisions to students with more than a dozen years of amateur programming experience, several years of professional experience or work on open source projects, hands on experience with a range of technologies and an in-depth knowledge of many technologies exceeding that of many long-term industry professionals and professors. While a gifted student can exist in any program in any field, it is nearly impossible to find an education student, a medical student or a law student who enters college at age eighteen with years of experience behind them and having had access almost equal to that of top professionals and researchers! Because of this disparity colleges and universities face a new challenge that they have never had to deal with previously.

All people learn differently and for some people collegiate work is the easiest or best way to obtain new knowledge. Information Technology is an industry based on change, and one of the most critical skills that any IT professional will have is the ability to learn and the knowledge of how they learn best as an individual. Students who are self motivated and can learn without external pressures or resources will have a significant advantage, as individualized learning allows one to focus more, advance faster and learn more flexibly than students who, for their entire careers, will require classroom settings for educational enrichment. Most students will benefit most from a blend of educational opportunities.

While continuing academics is a traditional method of entering a profession, students in IT related fields should consider this decision more carefully than in other professions because of the abundance of other resources. Academic work in the IT field is often best used as a supplement rather than a comprehensive educational solution. Students using only academic work for their studies will generally find that their knowledge is far too shallow for real world work – even entry level – and that key technology areas have been missed.

Students in academia generally also face the challenge of mounting debt from the college programs themselves. This should not be discounted, as that debt is not only disadvantageous in its own right but could also leave the student unable to take key opportunities that come with higher inherent risk but offer greater career growth, forcing them to stick with slower growth, more stable positions. IT rewards flexibility more than most fields and students should be considering this early.

I have long suggested that students use collegiate work as a means to “fill in the gaps” between other things. College level work should never take precedence over real work experience. If college is considered to be more important than work then clearly there is a discrepancy between reality and the stated goal of an extended education. If the purpose of college is not to get work and not to advance in your career then by all means spend as much time in college as possible. But if college is not the goal and your career is the goal then college should be treated as one tool in a set of tools that can be used to forward your career.

I suggest that college work, whether done solely or while working in the field or participating in other studies, be done in as “stepped” a manner as possible. By this I am specifically referring to the Associate Degree available in the United States. This is typically a two year degree. A good, accredited “junior college” will offer an array of two year degree choices that will transfer easily to a four year school. Even if you intend to go directly on to a four year degree there are many benefits to a two year degree, but the most important is that you will have obtained a full degree and can then leverage it to get a professional position or a promotion at a current position. And if anything goes wrong and you are unable to complete a four year degree in a timely fashion, you will have the two year degree in place. Some four year universities, like the State University of New York’s Empire State College, offer mixed two and four year programs where you take a single program but receive an Associate Degree halfway to your Bachelor Degree.

I heartily recommend college educations because they, like all forms of education, encourage broadening and may point you in directions that you would not have gone on your own. There are certainly people who will do better with no college level work at all, but they are the minority, though perhaps not as small a minority as you may think. Some people absolutely need college work and cannot function without it. But for the average hopeful IT professional my stock recommendation is to take classes when they don’t interfere with work or the potential for work (i.e. you don’t have to give up interviewing and contracting just because you have to go to class.)

College and university studies in IT are currently best utilized by professionals in the early portions of their career but after having entered the field. Often IT professionals have an opportunity to take college classes part time, fully or principally funded by their employers. This changes the picture dramatically as you will not take a break from experience while going to school, you will get the obvious advantage of the degree itself and you will have an employer who is likely to appreciate that you were willing to take advantage of their continuing education program.

Because of college’s extremely high costs, both financially and in its demands on your valuable time, it is a very high risk when compared to other methods of breaking into IT. While it has its place and should, in time, become more useful as the pace of IT change begins to slow and schools begin to adapt to the rigors of IT, college work is still not the panacea that it appears to be in other fields and should not be thought of as such. Potential students should consider their options carefully.

Once having entered the field and begun to amass experience, young IT professionals should begin to look at college as a supplement to their ongoing learning and work.  The earlier in your career that a degree is obtained the more time that it will work for you and the more meaningful the material will be.  But if it is done in lieu of actual work experience it is unlikely that, even by the end of your career, a college degree will ever manage to pay for the time that it costs you, let alone the money that it will likely cost.

In conclusion, college and university studies are very likely to be highly valuable to you during your career, especially in lean economic times and when you look to make a move into management.  But college is not necessarily a good tool for “getting your foot in the door” of your career and is better used as a growth tool after a year or two of consistent work.  Most people seriously interested in and dedicated to moving into IT will probably find that three to six months of independent study and learning “at home” will be enough to land that first entry-level contract or job, which is far sooner than college work will help with the same objective.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-college-and-university/feed/ 0
Do IT: Breaking In – College and University – Degree Programs https://sheepguardingllama.com/2007/04/do-it-breaking-in-college-and-university-cs-vs-it-and-cis/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-college-and-university-cs-vs-it-and-cis/#respond Wed, 18 Apr 2007 12:39:48 +0000 http://www.sheepguardingllama.com/?p=1736 Continue reading "Do IT: Breaking In – College and University – Degree Programs"

]]>
In most fields the simplest and most obvious means of breaking into the industry is through higher education. In many fields a degree from an accredited college or university is not just required in a practical sense (automotive mechanical engineers will get nowhere without a baccalaureate or higher degree in mechanical engineering) but required legally (hairdressers, doctors, etc.)

[In the United States the terms college and university are widely used interchangeably. Technically a college is a school of study while a university is a collection of colleges. For example, the State University of New York consists of many colleges as well as a few “sub” universities all within a single university. But in Canada a college is similar to a junior college in the US and a university is what we call a “four year school.” This can be confusing: in the US being accepted to a college means that you are definitely accepted to its associated university but not necessarily vice versa, while in Canada a university is considered to offer the more serious degree and college is a “less than” baccalaureate program.]

Collegiate level work, regardless of the level, can be a good means of getting a foot into the door of the IT industry. This can be done through social networking with other students, contacts from professors and staff or simply by the fact that when you are finished you carry a degree or certification from the school.

At this time there are two principal degree programs available to a prospective student: IT/CIS and CS. IT/CIS is the most problematic because every school seems to have its own name for this program. The most common are Information Technology or Computer Information Systems. Some schools inappropriately call this MIS or Management Information Systems, but MIS should be a specific field of study within an IT/CIS program. Some schools use the term Information Systems, but many schools shy away from this as it was common for some time to use that term to create false resume value by passing off Library Science graduates as technology professionals, often without ever having provided a single technology resource. This is not limited to small schools; some major universities in the US have taken this tack to keep costs low (librarians are very cheap while IT professionals cost as much as the most senior collegiate staff) while turning out large numbers of graduates (as people unable to handle the rigors of a true IT program flocked to these schools to “buy” their degrees.) The names vary, but IT/CIS programs are, or should be, targeted at the skills used by the IT industry. The field of study, like the profession, is extremely broad and will often encourage a high degree of specialization within the program.

The other popular degree program is CS or Computer Science. Computer Science grew out of Electrical Engineering, which used to be the training ground for IT professionals before the field gained its own recognition. The IT field started academically as being integrated with computer and hardware design and then with programming. When Computer Science became a field of study in its own right it was widely recognized that computer engineering was an electrical engineering discipline and that computer science was its own field focusing on the programmatic needs of computational machinery. The IT field has grown, most professionals within IT are not focused on programming, and computer science has been able to become the field that it should be – the study of the theories of programming. Computer science is not a strictly IT discipline. An analogy that might explain the relationship between CS and IT is the separation between being a physicist and an engineer. Engineering relies on physics to discover new principles in many cases, but physicists rely on engineers to actually create and maintain real world devices. Engineering is a gigantic field whereas physics research is relatively small.

Because of this separation, programs in computer science are focused on preparing students for algorithmic research, and most jobs are in companies pushing software boundaries such as operating system, database, video game, compiler and high performance computing vendors, or in academia. Computer science is a niche field related to, but not truly, IT, although IT does need CS to survive. Students interested in a career in IT should not be taking CS degree programs, as this leads them down a path of study that does not give them the skills necessary to work in IT. Only students in software development have the option to choose between the two areas of study, and only with extreme rarity is there benefit to the CS path over a dedicated software development path in IT. IT’s focus on software development is generally targeted towards creating real world business software in business environments. It is about using tools, working in teams, being aware of available technologies, etc. CS will often focus on low level languages, algorithms and math.

It is so common for students interested in IT to choose collegiate work in CS that it poses a real issue for the field. Students are expecting training in their chosen field while taking coursework in a different field. Colleges and universities should do more to educate students coming into these programs, but students need to take responsibility for entering programs designed for their intended career path. CS is an important and very difficult field of study, but it is not a path into traditional IT and with rare exception should be avoided; in my opinion it should always be avoided by anyone not intent on achieving at the very least a Master’s (five year graduate study) if not a Doctoral degree. CS is the theoretical physics of the computer world.

As this is an article series on Information Technology, I will speak only of IT and CIS collegiate programs in future articles.

Some larger IT schools are beginning to offer highly specialized degree programs that can also be considered IT or CIS programs, such as Rochester Institute of Technology’s Masters of Networking and Systems Administration within its IT and CS school. These specialized degrees can be really good for students interested in a single, concentrated career path but are probably not as beneficial for students with broader interests or hopes of switching from a dedicated technical role into management later in their careers.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-college-and-university-cs-vs-it-and-cis/feed/ 0
The Future of Transportation https://sheepguardingllama.com/2007/04/the-future-of-transportation/ https://sheepguardingllama.com/2007/04/the-future-of-transportation/#comments Tue, 17 Apr 2007 14:03:17 +0000 http://www.sheepguardingllama.com/?p=1737 Continue reading "The Future of Transportation"

]]>
I have long contemplated the future of human transport. It is clear that our current transportation systems are failing in several areas, notably safety, efficiency, difficulty and environmental impact. In North America we face some of the most difficult transportation challenges because of the great distances often associated with travel in this region as opposed to areas of more dense population such as Western Europe or the Pacific Rim. But I believe that there is a solution on the horizon that could, over the next several decades, have a significant impact on each of these problem areas while increasing convenience and improving metropolitan life in dense city centers.

Before looking at a proposed solution we should examine the problem domain closely. In many markets, but most significantly in the United States and Canada, there is a significant need for “personal” transportation, meaning cars, vans, motorcycles, etc. This is because a large portion of the population lives very far from urban centers, making public transportation largely impossible. Even in highly populated states like New York and California there are millions of people who live in areas too sparsely populated to have efficient public transport. This issue is exacerbated by the cultural inclination, especially prevalent in the US, for “independence” which results in many people driving cars when public transportation is readily available. This has created a stigma around public transportation, and in many cities buses, and sometimes even rail modes, can be seen as reserved for poorer travelers.

In addition to traditional concerns, more recent changes in security for air travel have shifted the balance of travel away from flying and back towards ground travel. Because of the increase in time and difficulty associated with air travel it has become more time and effort efficient to use fuel-inefficient private cars for longer distances than ever before. In a personal study I found that the total amount of “transport time” necessary for a person to fly versus drive from Anderson, South Carolina to a location outside of Rochester, New York – a highway trip of some 860 miles or 1384 kilometers – was only marginally less by plane once the drive time to and from the airports, as well as the travel time of the people transporting the passenger, was factored into the equation. And even with that marginal time savings the flight required planning, careful scheduling and two additional people beyond the person actually traveling in order for the process to work, as there was no available public transportation on either side of the flight.

Modern cars, trucks and other personal transportation devices are highly inefficient from a fuel perspective. Personal vehicles also require a high ratio of operators to passengers. This means that a larger proportion of the population is spending time driving instead of working, relaxing, etc. This is not a productive use of time and is a detriment to the economy.

Modern transportation systems are necessarily complex. Vehicle operators must learn a large set of driving rules, which is difficult for many drivers, and then must manage to operate a vehicle without catastrophic failure for tens of thousands of hours throughout their lifetimes. Driving is highly monotonous and requires constant vigilance, and travel is often done most efficiently at times when drivers are unlikely to be highly alert, such as early in the morning or in the afternoon after a long day at work. It is unreasonable to expect people to be able to consistently drive safely under these conditions. Driving is a highly dangerous task performed by an enormous number of people with very little, if any, training, done over long periods of time in many weather, traffic, health and other conditions. Trains, which are much safer than cars, are operated by trained professionals. Airplanes are flown by highly specialized pilots. But everyone drives cars and almost everyone has been in an accident at some time.

Driving represents a waste of the driver’s time. In today’s increasingly hectic and high pressure world people have little time to spend driving. Almost everyone has something better that they could be doing with their time. There is no panacea for solving the issues involved with time lost to travel, but we can address issues that arise from needing a large percentage of the population to be actively involved in driving. Passengers, whether in private vehicles or on mass transit, can safely spend their time working on a laptop, reading a book, making a phone call, sleeping, etc. By reducing driving stress and providing more time for more important aspects of life we could recover value in society that is currently being lost to commuting. Obviously other social initiatives such as telecommuting will have a greater impact than any improvements that we will ever be able to make to the transportation infrastructure, but there are many people who will always need to travel and many people who will simply want to travel, whether for commuting or other reasons, so telecommuting only serves to address a small segment of a larger problem.

Efficiency is a more difficult issue to tackle as it cannot be as clearly defined. Current traffic control mechanisms are not designed for efficiency but for safety. Because of the way in which cars are driven it is necessary to use traffic control devices such as speed limits, stop and yield signs, traffic circles, traffic lights, traffic priority, etc. All of these things both cause time delays and increase the difficulty involved with driving. If traffic could be managed more fluidly there is a real potential for route optimization inherent in the system.

Any solution attempting to deal with these myriad issues is likely to be complex. But I believe that there is a viable solution on the horizon that can serve to free society from many of the current constraints of modern transportation. I believe that the best solution, in the near term, to deal with a nearly global transportation issue (outside of the third world) is a complex series of transportation integration and automation techniques that can transform our current quagmire of non-standardized transportation modes into a single system capable of moving people quickly, safely and efficiently.

The most difficult of my proposals, both from an implementation standpoint as well as from a perspective of social acceptance, is to eliminate the human element as much as possible from every day driving by switching to computer driven private vehicles. We do not yet, as of this writing, have the ability to build vehicles that can drive themselves but we are constantly approaching this elusive target. It will not be long before the technology is widely available that will allow us to have completely automated vehicles.

At first it will undoubtedly be very unpopular in the social collective to propose self-driving cars. People will feel that their freedoms are being reduced, and clearly there are many ways in which this technological change will provide not only detailed information about driving history but also, to some degree, intended driving habits for the very near future. This information’s safety and privacy will have to be very carefully regulated and protected or the citizenry will be very unlikely to adopt this important change. In most regions where cars are affordable and common it is seen as a rite of adulthood to drive anywhere at any time. Being in control of a motor vehicle is often seen as a symbol of status and age. But this will rapidly fade and will only prove to be a barrier for the first few years of adoption.

The benefits of automated vehicles are so many that it is difficult to comprehend the significance of this change. The dangers are obvious: it is possible that computer controlled vehicles will have a higher accident rate than human driven vehicles or, at the very least, will shift the dispersion of accidents from affecting primarily bad, incompetent or careless drivers to affecting everyone equally. At first it will be a challenge to make computer controlled vehicles as safe as human driven ones, but as the technology begins to be used we can make faster and more capable driving systems and systematically reduce accidents and dangerous situations in ways that we cannot do with human drivers. Humans have a threshold of safety that, at reasonable speeds, cannot be broken, and all human driven vehicles have a certain inherent level of danger. Computer controlled vehicles can, in time, break this barrier and eventually save many lives.

Perhaps of greater importance is the fact that without drivers there is no longer a need to hold car owners responsible for accidents. Reasonable liability could be removed from the car owner (as there is no driver) and the cost of driving could be reduced further by eliminating the bureaucratic overhead of insuring every driver individually. Instead, insurance for all participants in this automated transportation “experiment” could be handled through a single proxy such as the manufacturers or the government.

Moving to automated vehicles has many hidden benefits. One of the most significant is that it makes transportation more accessible. Suddenly we no longer need to limit driving only to those sixteen years of age and older. Younger vehicle operators could safely use the vehicles for travel. In many rural areas fourteen and fifteen year olds who wish to work are unable to do so because they are unable to travel to locations with jobs. Older drivers who have lost the ability to drive safely can continue to travel as they always have. People with handicaps or injuries will have more opportunity to travel without assistance than previously available to them. Even individuals who are intoxicated can safely travel in a vehicle that they do not have to pilot. Many people are simply too dangerous to drive and many are simply afraid. Given the ability to free our society from the limitation of an economy driven solely by those able to drive, it seems obvious that we should do so. Automated vehicle technology could expand the working population by millions in the US alone. Reducing the idle workforce is not a benefit just to a single national economy but a benefit to the world, expanding the total global economy.

Automation is not nearly as simple as just making cars that can drive themselves. That would be an attempt to displace drivers without re-engineering the entire driving system. The computers controlling the cars would have little benefit over human drivers – at least for a very long time. To truly take advantage of this type of system I propose that a complex web of inter-vehicular communication be established that will allow computer control systems in vehicles to communicate with each other.

Inter-vehicular communication is a backbone of improving driving conditions. This communication system is so critical that I believe vehicles should maintain direct line of sight as well as radio frequency communications with vehicles sensed to be within close proximity, communications through the cellular network or its data carrier equivalent, and satellite based systems as a safety mechanism. These systems will allow vehicles to communicate directly with each other, relaying speed, direction, intention and other important driving information that can be used to compute paths for maximum safety and efficiency. Additionally, communications back to a centralized, most likely regionally based at first, transportation grid network will be used for centralized traffic coordination. Each vehicle will report its current location, direction, speed, intentions and priority (for time critical traffic such as ambulance, fire and police) and the central traffic planning system can determine the optimum routes and speeds for all of the traffic taken as a whole.
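
As a rough sketch of what such a per-vehicle status report might look like (the field names, values and JSON serialization are my own assumptions for illustration, not part of any real system):

    # Hypothetical sketch of the per-vehicle status report described above.
    # Field names and the wire format are invented for illustration only.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class VehicleReport:
        vehicle_id: str
        latitude: float
        longitude: float
        heading_deg: float     # direction of travel in compass degrees
        speed_mps: float       # current speed in meters per second
        intention: str         # e.g. "continue", "turn_left", "exit_ramp"
        priority: int          # 0 = normal; higher for ambulance, fire, police

    report = VehicleReport("NY-1234", 43.16, -77.61, 92.5, 24.0, "continue", 0)
    print(json.dumps(asdict(report)))  # what might be relayed to the regional grid

A report along these lines, broadcast both to nearby vehicles and up to the regional grid, is roughly the minimum information the central planning system would need in order to coordinate routes and speeds.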

By incorporating centralized traffic planning we can gain incredible efficiency within the traffic system, automatically rerouting traffic to correct for accidents, congestion, hazardous conditions, etc. With a central traffic control system vehicles can have small speed alterations made, far in advance, that barely affect travel time but that can automatically allow for full speed flow through former traffic light and stop sign situations. Traffic will be timed so that vehicles no longer have to start and stop unnecessarily. Not only does this create the ability to travel between destinations much faster and in a more comfortable manner, but it also increases fuel efficiency, as the process of starting and stopping a vehicle is extremely inefficient. This system has the ability to so dramatically improve highway utilization that it may be possible to almost completely eliminate traffic congestion even in the busiest cities simply by making the traffic system cooperative in nature rather than competitive.
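
As a toy illustration of the speed planning described above (a sketch of my own, assuming the central planner knows each vehicle's distance to an intersection and the time window in which the crossing will be clear; it is not a production traffic-control algorithm):

    # Toy sketch: pick a cruising speed that puts a vehicle through an
    # intersection inside its reserved time window instead of stopping.
    # The scenario and all numbers are hypothetical.

    def plan_speed(distance_m, window_open_s, window_close_s,
                   current_speed_mps, max_speed_mps):
        """Return a target speed in m/s, or None if the window cannot be made."""
        earliest_eta = distance_m / max_speed_mps      # best case arrival time
        current_eta = distance_m / current_speed_mps   # arrival at current speed

        if window_open_s <= current_eta <= window_close_s:
            return current_speed_mps                   # already on schedule
        if current_eta < window_open_s:
            return distance_m / window_open_s          # ease off slightly
        if earliest_eta <= window_close_s:
            return distance_m / window_close_s         # speed up a little
        return None                                    # replan for the next window

    # A vehicle 600 m out doing 12 m/s, with the crossing clear from 40 s to 45 s:
    print(plan_speed(600, 40, 45, 12, 20))  # ~13.3 m/s, so it never has to stop

The point is that a tiny adjustment made well in advance replaces a full stop and restart, which is where the time and fuel savings come from.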

There is more to making a complete traffic system than simply altering the way in which the automobile behaves. Currently the safest and the most efficient form of transportation for large loads over long distance is the rail system and I believe that this mode should be utilized to its maximum possible extent to achieve the greatest possible gains. I believe that trains should be utilized, as much as possible, to move people from densely populated areas to important urban locations much as they are used now with some amount of logical expansion.

I believe that the rail system will play an expanded role in the future of transportation. As trains continue to become faster and faster – both conventional rail based trains and more modern systems such as mag-lev – trains will become a better and better option for long distance transport. If trains continue to push speeds above 300mph (480kph) they will continue to become a simple, fast and safe alternative to air travel. As the rail system grows it will become easier and easier to fund research into increasing the speed and safety of the system.

In addition to playing an ever-expanding role as a transport for humans I believe that rail will begin to play a critical role in the transport of personal transportation vehicles. By utilizing the rail system as a means of moving vehicles along with people the rail system can be integrated into the everyday travel system allowing for faster, safer and more efficient long distance travel. While this is unlikely to displace vehicle rental for situations of extremely long distance it could serve to simplify travel between local metro areas.

An ideal location for this type of hybrid rail and personal vehicle transport can be seen along the “Maple Leaf Line” running from Toronto to New York City and including the Hamilton, St. Catharines, Niagara Falls, Buffalo, Rochester, Syracuse, Utica, Albany and Hudson Valley metro areas, as well as the famous Boston to DC corridor including New York City, Philadelphia, Hartford, Newark, Baltimore, etc. It is unlikely that transporting a personal vehicle from Atlanta to San Francisco would be economically advantageous, but moving it a few score to a few hundred miles might be a reasonable range. By moving some amount of overland vehicle traffic onto the rail system we could bolster the rail economy while reducing highway congestion and saving the roads from unnecessary wear and tear.

With all of these changes comes another opportunity. Because we have managed to increase vehicle efficiency, increase safety and provide a mechanism for much of the long distance travel that is necessary, we can now look at options to redesign personal vehicles themselves. We can introduce groundbreaking new safety and comfort features. For many people we can improve safety simply by having them face the back of the vehicle instead of the front, because they no longer need to watch the road. Even a very small car could be equipped more like a limo than a traditional car. A bed or beds could be made available for long distance trips. There is no reason not to sleep while riding in the car; it is a perfect use of the time.

Personal vehicles could also be redesigned to take advantage of control mechanisms which are not feasible for humans to operate such as all wheel steering. A computer can easily control all four wheels and use this additional control to allow cars to avoid accidents or to park in tighter situations than ever before. In theory cars could parallel park themselves with only a centimeter or less between the bumper of the vehicle in front and the vehicle behind.

Another obvious advantage of an automated car is its ability to travel without any passengers at all. This would instantly add the equivalent of valet parking for everyone, all of the time – which is extremely advantageous in settings like the grocery store or the mall. But even more significant is the ability to have a car take itself to the shop for regular repairs and maintenance while you are at work. You don’t need to schedule time out of your busy day and have someone pick you up and drop you back off at the shop. This not only makes humans more efficient but reduces miles driven. If the kids need to be picked up from school, assuming they are old enough to travel safely on their own, the family vehicle could just arrive at school to get them. No adult needs to spend their time driving out there to get them. Often families could reduce their reliance on multiple vehicles as well, since one person could drive to work and the car could automatically return for the second person to go to work. While only a small number of people could reduce from two vehicles to one, many families could reduce from three or four vehicles to two or three.

It is not uncommon for errands to require the participation of a driver only as an incidental requirement. Many businesses would begin to offer services targeted at driverless errands. Grocery stores could offer online shopping with free delivery to their parking lot, where they fill your trunk for you.

As we have seen in many regions of the United States already, automated toll collection systems are becoming common and highly effective. People who have used them are seldom willing to give them up. They save time and fuel, as drivers need to slow down less or not at all at toll booths, and they make travel much easier. In some markets the automated toll collection mechanisms are beginning to be used by restaurants and other businesses as an easy system of fast payment in drive-thru lanes. But this is only the logical beginning of this system. The applications of this in a computer driven automobile are enormous.

Obviously the use of such a system to allow a computerized vehicle to pay its own tolls is important. But soon we could have the vehicle paying for its own services in many places such as fueling stations. As a passenger in the vehicle you might sleep overnight on a several hundred mile journey and find that the vehicle automatically refueled itself two or three times during the night without disturbing you. And since it was programmed to get you breakfast fifteen minutes before your alarm went off, it had already stopped at a restaurant, placed your standing breakfast order, or the closest thing available to it, and had your breakfast sandwich, hash browns and hot coffee ready to go when your alarm went off. All of this without any intervention from you. If the car was to be loaded onto a train for long distance travel, the vehicle could handle this automatically as well, both from a physical embarkation standpoint and from a payment perspective.

With a fully computer controlled vehicle connected to a central computer controlled traffic system we gain the advantage of having exceptionally good scheduling capabilities. No matter how well we design a transportation system we can never prevent every accident or foresee every traffic affecting event but we can predict travel times, in general, with extreme accuracy. And this schedule can and would be constantly updated during a journey so that unexpected adjustments could be dealt with as quickly as possible with as much foreknowledge as is possible. The vehicle could automatically attach to your corporate and personal calendar systems and update your travel itinerary so that meetings or other scheduled activities could be rescheduled or people meeting you would know when to expect you, etc. If you published your travel data to a secure web site family, friends or coworkers could track your progress to know where you are or when to expect you.

With the increasing ease of payment and measurement possible with this system there is another important opportunity presenting itself. We now have the ability to eliminate taxes related to roads, as well as the differentiation between toll roads and non-toll roads, and instead move to a “penny per mile” system of payment. As the traffic control system and the vehicle itself know the exact location of the vehicle at all times, and since the vehicle can make automatic toll-like payments, we can now charge a nominal fee for actual road use based on the number of miles that the car travels. People who almost never drive would pay very little and those who are actually responsible for highway wear and tear would pay more. This more closely ties the costs of road creation, maintenance and repair to usage, making a more capitalistic system instead of allowing a government bureaucracy to collect road fees via taxes and distribute those funds through political posturing. This system encourages reduced road use and more economical thinking.

In addition to traditional concepts of road costs, this system offers the ability to have different roads carry varying costs at different times of day. Heavily congested commuter lanes might be more expensive during busy times. Vehicles could be instructed to choose routes based on lowest time to destination or lowest cost. Often the lowest cost route would be the shortest and fastest route, but during high traffic congestion times it may not be, and cost conscious or time insensitive passengers might choose “the scenic route” even if they lose a few minutes to bypass an area of congestion. Travel cost algorithms in the vehicle could automatically determine if switching from highway to railway along a route would be cost or time effective and do so automatically. Rail may be chosen based on departure time or other factors. By having the vehicle always calculate the most efficient combination of routes and modes we can further reduce the time, congestion and cost involved in travel.
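
A sketch of how such a travel cost algorithm might weigh its options (the routes, fees, times and the value placed on an hour are all hypothetical figures of my own, purely for illustration):

    # Illustrative sketch of route/mode selection under per-mile pricing.
    # All fees, times and the passenger's value of time are assumptions.

    routes = [
        # (name,             miles, fee per mile ($), travel time in hours)
        ("commuter highway",  30,    0.05,             0.75),
        ("scenic route",      36,    0.01,             1.10),
        ("drive plus rail",   32,    0.03,             0.90),
    ]

    def trip_cost(miles, fee_per_mile, hours, value_of_hour):
        """Generalized cost: road or rail fees plus the passenger's value of time."""
        return miles * fee_per_mile + hours * value_of_hour

    value_of_hour = 15.0   # dollars this passenger assigns to an hour of travel
    best = min(routes, key=lambda r: trip_cost(r[1], r[2], r[3], value_of_hour))
    print("Chosen route:", best[0])

With a high value placed on time the pricier commuter highway wins; drop the value of an hour to a dollar or two and the same calculation picks the cheap scenic route, which is exactly the cost conscious behavior described above.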

As we have already seen, we have made travel take less time and less fuel than ever before. This has a direct and significant environmental impact. Even without making any modifications to the power production in vehicles of today we can reduce environmental impact by reducing total miles driven, reducing “stop and go” traffic so that vehicles can spend more time traveling at efficient speeds and reducing the total number of vehicles which need to be manufactured. Each of these factors is significant in its own right. But we can do far more than we are currently doing from a power production perspective as well. Current research sees us moving towards alternative means of powering cars that are cleaner and more efficient. This is critical to any transportation needs of the future.

With centralized traffic management we are able to reduce congestion and significantly raise the average speed at which a vehicle travels, especially in urban and heavily populated areas. While current speed limits are often in the 65mph range, few people are able to maintain such speeds over a large portion of their actual travels. Most driving is done on lower speed roads, in traffic or under “traffic control” conditions where much time is spent stopped at lights and stop signs waiting for traffic to clear. With these issues widely eliminated, or at least reduced, we no longer need vehicles capable of the incredible speeds currently available from production automobiles. Most cars can easily travel above 100mph but this is unnecessary. Often petrol and diesel fueled vehicles are overpowered to provide a comfortable range of power for starting the vehicle from a stop. But alternative fuel methods such as electric do not need this, as they provide maximum torque at idle. A newly proposed alternative fuel, compressed air, is also expected to have a positive “at idle” torque signature.

By altering our fuel systems we can make cars that travel more efficiently in real world scenarios. These vehicles will be designed more appropriately around how computers will pilot the vehicles and not on how people fantasize that they would like to drive if no one was watching. While eventually highway speeds could likely be increased once the system surpasses the current level of safety true high speed long distance travel would utilize the modal change to rail in order to achieve extremely high speed terrestrial travel.

Smaller, lighter vehicles such as the GEM car or the proposed compressed air vehicles have additional advantages when we begin to discuss intermodal options. Small vehicles can be carried in higher density by train and would further reduce the cost of travel.

In conclusion, I believe that an integrated, computer controlled, centrally managed, intermodal, alternatively fueled transportation system offers society an opportunity for massive reform. This change can dramatically impact quality of life for a very large segment of the population as well as have positive impacts on the economy and the environment. There are, of course, those who would oppose these changes, such as incumbent automobile manufacturers, oil companies and the Teamsters union, who will see this as a loss of jobs in their sectors. This is true; these sectors will be negatively impacted, but as a whole the economy is being held back artificially by these sectors being unable or unwilling to innovate. Much as manual scribes would once have opposed the printing press, we cannot deny that innovation has bolstered the economy not just in any one nation but globally, increasing the lot of mankind. The highway and railway systems are part of the national infrastructure and are not for the private use of large corporations or unions. The purpose of the roadways is to serve the public good and I believe that this is the best way, in the foreseeable future, to serve that good.

]]>
https://sheepguardingllama.com/2007/04/the-future-of-transportation/feed/ 2
Do IT: Career Growth – Growing Organically https://sheepguardingllama.com/2007/04/do-it-career-growth-growing-organically/ https://sheepguardingllama.com/2007/04/do-it-career-growth-growing-organically/#respond Fri, 13 Apr 2007 22:03:18 +0000 http://www.sheepguardingllama.com/?p=1858 Continue reading "Do IT: Career Growth – Growing Organically"

]]>
I have found that there are two approaches to continuing IT career growth: Growing Organically (aka Pyramid Skills) or what I like to call “Going for the Jugular” (aka Needle Skills.) Both are viable methods for advancing your career and working deeper into the field.

Growing Organically means accumulating a broad base of knowledge covering many areas before moving on to more advanced learning within a specific area. This approach means that it will take some time before you have an opportunity to work with advanced skill sets, but it has many advantages as well. By growing your knowledge slowly and evenly you have the chance to get a solid appreciation across the IT disciplines. This is important because almost all job roles involve a tremendous amount of cross-discipline interaction, and by understanding the technologies, functions, roles and challenges of different disciplines you will have more ability to work within your own as well as to work with people in other areas.

Taking the time to grow your knowledge base, to ensure a thorough understanding of many discipline area basics, will slow immediate career growth but provide a more powerful foundation for increased growth later in the career and provide more stability to allow for greater changes as the field continues to grow and change on its own. By accumulating a strong base of knowledge across many areas you will be much more capable of withstanding job changes, reassignments, role changes, etc. IT is a field that expects great changes over very short periods of time, and specific skill knowledge is not generally as highly rewarded over time as broader knowledge and experience.

Taking the time to establish a firm foundation from the beginning will provide for safety as well as making continued education easier and faster in the future.

Going for the Jugular refers to specializing very early on in a very select field of study. This could include XHTML Web Development, C++ Programming, Solaris Systems Administration, Cisco Networking, etc. This type of skill development allows for very quick development within a highly concentrated area of study but does not provide the large base of knowledge that learning more organically provides.

I refer to this as developing “needle skills” because the knowledge set involved is slender and vertically aligned. Knowledge will go very deep very quickly but available job roles will be extremely specific. The obvious advantage to developing “needle skills” is that one can focus on areas of interest to the exclusion of other areas early in the career, and pay scales will increase more quickly during the early career years.

Going for the jugular carries a long term career consequence: it reduces the ability to move into management and makes only larger organizations, with more specific job roles available, good career targets. This cuts the potential job field almost in half in many cases.

Choosing between these modes of study requires weighing a tradeoff between short term financial gains and long term stability and greater total career growth. I recommend taking the time to become well versed in many disciplines – even those that are less exciting and interesting – because the advantages of this method will become evident within very little time and can be critical during early career years when jobs are scarce and being picky about job role is not a reasonable option.

]]>
https://sheepguardingllama.com/2007/04/do-it-career-growth-growing-organically/feed/ 0
Do IT: Breaking In – My Home Network https://sheepguardingllama.com/2007/04/do-it-breaking-in-my-home-network/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-my-home-network/#comments Fri, 13 Apr 2007 21:41:05 +0000 http://www.sheepguardingllama.com/?p=1850 Continue reading "Do IT: Breaking In – My Home Network"

]]>
To illustrate through some of my own experiences I wanted to talk about the network that I built at home in the late 1990s. When I was building my own home network it was very uncommon for people to have networks at home and “home networking equipment” was not yet available so I was working with small office equipment by default. I was on a tight budget so everything that I did had to be very inexpensive and practical.

Before I started building my home network the only piece of gear that I had was a Pentium based workstation without any Ethernet card. I had to add that myself. I bought a small five way hub and mounted it in the basement of my two story (three with basement) apartment. I ran CAT3 to each floor of the apartment. I put a workstation in my bedroom on the top floor. I added a large workbench in the basement.

I hunted around and found a shop selling used Intel 386 and 486 based Compaqs. I bought several and set three up on the metal workbench in the basement. I got an early copy of Linux and installed it on all three machines. Doing a Linux install was no small feat at the time. Even doing a Windows NT install could be incredibly challenging. Having the command line only Linux machines gave me a lot of opportunity to work with UNIX at home and made learning about IP networking much easier.

At the time that I was first building my educational home network the only Internet access that I could reasonably get was dial-up. Getting a serial/modem router was expensive and difficult at the time so I was forced to build my own. I was studying for the Microsoft Certified Systems Engineer certification at the time and Microsoft provided free time-limited copies of the operating system for use while studying. I used Windows NT 4.0 Server and Proxy Server 2.0 to build a high performance, caching firewall/proxy that would manage all of the Internet access for the house while its cache accelerated web access.

I was fortunate that this setup provided an “always-on” Internet connection at a time when the idea of having something like that at home was almost unheard of and all of the computers were able to talk to each other very quickly. Adding the caching proxy made a very big difference for web surfing performance. This was one of the best steps that I took during this period.
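
To make the caching idea concrete, here is a minimal, purely illustrative Python sketch of what any caching proxy does conceptually: answer repeat requests from local storage instead of going back over the slow upstream link. The URL, the TTL value and the function name are made up for illustration; this is not how Proxy Server 2.0 itself was configured.

import time
import urllib.request

cache = {}   # maps URL -> (time fetched, page body)
TTL = 300    # keep cached pages for five minutes (an arbitrary example value)

def fetch(url):
    """Return a page body, serving repeat requests from the local cache."""
    now = time.time()
    if url in cache and now - cache[url][0] < TTL:
        return cache[url][1]                        # cache hit: no upstream traffic at all
    body = urllib.request.urlopen(url).read()       # cache miss: cross the slow link
    cache[url] = (now, body)
    return body

fetch("http://example.com/")   # first request goes over the dial-up link
fetch("http://example.com/")   # second request is answered locally from the cache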

To encourage the use of the network by a wider audience and to provide artificial challenges for myself I built a MUD server that ran on one of the faster Linux machines and could be accessed from anywhere in the house. I managed user accounts, tuned machines, did break/fix, installed new software, programmed, etc. This network, incredibly basic by today’s standards, was quite impressive at the time and gave me a significant advantage in interviews and on my resume. I was told by interviewers at the time that my initiative and willingness to spend so much time at home working with these technologies was one of, if not the, most impressive things on my resume.

I found it additionally helpful that I was using commercial Compaq Deskpro desktops, even if they were old, because I would often compete for desktop support contracts and having worked directly with Deskpros put me above many people competing for the same jobs who had only ever played around with consumer PCs. This was a critical lesson for me. Consumer is not the same as commercial and companies know that. Shortly after that I decided to stop using desktop machines as servers – a practice which is still common today almost a decade later. I went out and bought an old Compaq Proliant server complete with RAID controller, hot-swap SCSI drives, etc. This took my home network to a new level of seriousness and gave potential employers something to really think about. There was very little hardware that I hadn’t worked with directly and when I talked about working with Windows NT 4.0 Server they knew that I meant on server class equipment in the way that big businesses were using it.

As the cost of older equipment drops and the cost of software continues to plummet (enterprise operating systems are very inexpensive today as are many commercial database products, firewalls, etc.) enterprising young IT professionals have greater and greater expectations to live up to. Today everyone, even non-IT professionals, has an extensive home network with switched Ethernet and a hardware firewall. Consumer class servers are being introduced by Microsoft to bring obvious data management features into the home. Media centers and the associated servers for them are beginning to appear in homes now as well. Everyone has high speed, always-on Internet access. Now we expect to see even more from an IT professional’s home learning network.

A home network at first should cover basic systems and networking technologies.  As the student advances the network should advance to reflect the technologies that he or she is hoping to work with in the office.  The home network should be used as a means to push forward – to gain experience in areas where the traditional workplace may not be providing enough resources or opportunities.  Even if the career stalls the educational process should not and the home network can be used to leverage that education into a form of experience that can be invaluable when that same experience cannot be gained through traditional employment.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-my-home-network/feed/ 2
Do IT: Breaking In – IT at Home https://sheepguardingllama.com/2007/04/do-it-breaking-in-it-at-home/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-it-at-home/#comments Wed, 04 Apr 2007 23:18:46 +0000 http://www.sheepguardingllama.com/?p=1847 Continue reading "Do IT: Breaking In – IT at Home"

]]>
Most professions have few elements that you can bring home. The more technical the field generally the less you can take home. But IT is one of those really cool fields where there is almost nothing that you can do in the office that you can’t do at home as well. This is good and bad. It means that to really excel you may have to bring your “work” home with you. But for an ambitious IT professional-to-be it presents itself as a unique opportunity to move ahead of your peers.

As this is a “Breaking In” article I will focus only on the areas where “Doing IT” at home applies to getting your foot in the door. Today almost everyone already has a computer at home. It wasn’t that long ago that having a computer at home was itself a differentiator for IT candidates, but today the expectations have gone up considerably and this trend is expected to continue. If you are interested in the IT field you should, at a minimum, be doing as much as or more than any of your non-IT peers are doing at home. Today this can apply to entertainment as well, though that is not so much a factor from an IT perspective – although the more you do with your computers the more comfortable you will be with them.

The bottom line is that the more you do with computers and networks the more you are going to know and be comfortable with them. It takes a lot of time and experience before you will be ready to handle any situation and no one is completely ready. Learning to research answers to your problems online without anyone to help you, fixing hardware and software issues, troubleshooting from scratch and more are the skills that no one can teach you. You must learn these skills on your own. Doing this at home gives you an advantage that you can’t get in other ways. Take advantage of it.

Entertainment: Today “convergence” means that you can do almost everything from the Internet and personal computing platforms. This can include video games, shopping, Internet television, streaming Internet radio, podcasting, vlogging, blogging, photo hosting, etc. As more and more non-technical people begin using their computers for a wide array of entertainment purposes the knowledge base will continue to increase and being knowledgeable about a wide array of computer uses can be very helpful to a budding IT professional. This is a perfect pursuit for “leisure” time but should not be done at the expense of serious “IT at home” studies.

Networking: Almost everyone has some amount of home network, whether it is a simple Internet connection or a mixed Ethernet and wireless network with firewall, print server, multiple desktops and laptops of both PC and Mac variety, media centers, VoIP phones, wireless handheld video game systems, video game consoles, etc. Ordinary people are beginning to add network storage into their home networks along with other advanced features. This raises the bar significantly for someone looking to “do everything at home.” I was lucky that when I first started in IT the level to which you had to bring your work home with you was much less – but that will be saved for another article.

You can begin by choosing to work with more advanced network equipment than you would have used if you were simply maintaining a traditional home network environment. Most home networks are protected with a minimum of a “consumer” grade firewall. Now that you are working in IT it is time to move away from “consumer” technology products. For example, Netgear makes two lines of firewalls at the time of this writing: WebSafe for consumers and ProSafe for small offices. ProSafe equipment offers greater security, more features and more configuration options. Many companies have two obvious paths into their product sites – one for consumers or “home” and one for professionals or “business”. Experience with consumer products is not your goal. It is time to move on.

One of the first lessons that needs to be learned about buying technology products, even for home, is that you can almost never just run out to the store and buy the parts that you need. Most companies’ commercial product lines are available only through authorized partners and online. A few companies allow their commercial products to be sold in stores but generally these are small product lines. You will never find HP or IBM servers, Cisco enterprise routers, etc. sitting on the floor of BestBuy.

Since this is probably your first venture into serious home networking you can probably safely start fairly small: a good, quality commercial firewall. You can probably start with an “all-in-one” device with router/firewall, switch and sometimes wireless built into the base unit. I prefer separate units for learning – a firewall unit, a switch and a dedicated wireless access point. But where you want to spend your initial funds, which are most likely limited, will depend on your focus.

Many homes today are already wired for networking but many are not. Go ahead and run CAT5e or better to every room of your house. Maybe more – your home office location will probably want several runs of cabling. Over time you will likely find yourself using several ports near any computer. In some spots you may want to consider additional, small switches to limit the cabling needs and sprawl.

Having a good, solid network is important for all of your IT studies. Almost everything that you do will be done over the network not just your “network” studies. And having a good Internet connection is, of course, essential.

Wireless has grown in importance and by working with wireless extensively at home you can get first hand, practical knowledge of the difference in wireless protocols and security standards.  Many people are using wireless today and just about everyone has major concerns about security and privacy.  By working with commercial wireless solutions you can be prepared for that task as well.

Computers: Of course you have to have a computer. But when you are working to move into the IT field you should have lots of computers. They don’t have to be cutting edge. In fact having computers from different eras (but not legacy machines) and of different types can be beneficial. You will want to have machines available at any time to rip apart, rebuild, install different software on and start all over again at a moment’s notice. Early on you will probably spend a significant amount of your time working with the physical hardware – building and modifying the computer itself. Even building a computer or a few from scratch can be fun, rewarding and educational. Picking out computer components is its own education in cost and performance factors.

Your home is your lab. This is the place where you can experiment with those things that may be embarrassing or dangerous in the workplace. At home you can push a computer to its limits until it breaks, attempt to tune it for performance, or whatever you like. In the office environment almost everything about a computer that makes it useful is its ability to connect to and communicate with other computers. Your home computers should be similar. This can be difficult as home networks are often single user affairs, or else the other users are only casual users, but you can learn a lot by doing things all by yourself as well. It just becomes purely educational rather than functional.

At home you will have the ability to learn new platforms like Windows XP Pro, Vista Business, Mac OS X and Linux. Select your targets based on your immediate needs and future goals. Not every technology is for everyone but IT tends to reward broad knowledge almost as much as deep knowledge so having worked a little on Mac or Linux even for a Windows desktop support professional might prove to be beneficial in a shop that has one or two Macs for some specific purpose but doesn’t need a serious Mac pro to support them. You never know how your knowledge might come into play in the field.

Computers can be expensive but for a lot of “learning” needs quite old computers can fill the roles quite nicely. With the availability of eBay there are many computers, network appliances and parts available that can make excellent educational tools at very reasonable prices. Good, older computers often with built in Windows licenses are available for well under $100 US.

Today virtualization technologies like VMware Server, Microsoft Virtual Server and Virtual PC have made it much easier for anyone to have multiple computers in their home. A good, fast desktop with plenty of memory and disk space can easily virtualize several desktop and/or server machines without the need to purchase another computer. This can save space, power, time and money.
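
As a rough, hypothetical sizing exercise (every number below is an assumption, not a recommendation), memory is usually the limiting factor for how many guests one desktop can host, and the arithmetic is simple:

def vm_capacity(host_ram_mb, host_overhead_mb, per_vm_ram_mb):
    """Rough count of virtual machines a host can run, limited by memory alone."""
    return max(0, (host_ram_mb - host_overhead_mb) // per_vm_ram_mb)

# Hypothetical lab box: 4 GB of RAM, 1 GB reserved for the host operating system,
# and 512 MB per guest leaves room for roughly six modest guest machines.
print(vm_capacity(4096, 1024, 512))   # prints 6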

Printers: While this skill set is rapidly becoming less and less important it is still wise to be well versed in installing, configuring and sharing a printer over the network. Most people have printers attached to their computers but few people configure their computers as print servers or use dedicated network attached print servers to drive their shared printing needs. This can be another simple but important differentiating factor between job candidates.

Applications: You have computers in your home but what do you do with them? You learn many applications of course. The obvious choices are the Microsoft Office Suite and the OpenOffice.org Office Suite. These are the principal business productivity suites today. Not many IT jobs actually require an in-depth knowledge of these non-IT tools but many helpdesk professionals use them daily and knowledge of common applications is always useful. Using many IT applications can only be helpful.

Servers: Now we reach the real differentiator between the advanced home computer user and the hard-core, ambitious IT professional home network – the server. Unless your target job is in programming, analysis or management having a home server can work wonders for your confidence, skills and career. The options for a home server are very wide and your choice will need to reflect your goals. If you are only using the server as a means to learn about desktop maintenance and to provide a place for backups and storage then most likely you will just want to have a single Windows Active Directory server as AD is the current most popular desktop management environment. But if you plan to go into system administration for UNIX you may want to have several Linux, BSD, Solaris, etc. servers. Some physical and some virtual.

Working hands on with real server hardware is a big deal. Just because you can virtualize doesn’t mean that it is the only way to go. This is about breaking in and having hands on experience with server class hardware can be effective. Since only very serious potential IT professionals usually have real servers at home this can turn heads. Having multiple servers will do even more for you. This is where costs start to climb but so does career value.

Don’t start buying servers before you are ready or your investment will come too early and not be effective enough. Get familiar with the desktop technologies, basic networking and applications before venturing into the server space. Once you do, be sure to shop long and carefully on eBay or other discount used equipment sources. Servers definitely don’t need to be new. For a few hundred dollars US today you can have very good, reliable, entry level enterprise servers at home. They are often large, loud and ugly (to others) but they are things of beauty to the IT professional who sees them as opportunity and experience at their fingertips.

You can use servers at home to fill a variety of roles just like they would in the real business world. You can use them for storage, security, network authentication, application hosting, remote access, name resolution, host configuration, desktop deployment, etc. The list could go on for a very long time. Eventually your server(s) will become the heart and soul of your home network. Once you get past these beginning phases you can start doing more exciting and useful projects with your server(s) but we will save that for another article.

When shopping for your first server look specifically for somewhat advanced servers with features like hot-swap hard drives and hardware SCSI/SAS RAID controllers. Features like these are not available on desktop class hardware and being able to work with them first hand is important. The most important factor is that the hardware be from a well known enterprise server vendor such as HP, IBM, Sun or Dell. Do not spend time with whitebox or custom built servers. Enterprises are interested in your experience with the category of equipment that they will be using.

Programming: If you are interested in getting into programming or web design then much of this is unnecessary for you. You need to spend your time writing code or producing web sites. It is far easier for these “soft” skills to be honed at home as there is practically no barrier whatsoever for someone to spend time learning and practicing. Open source projects or volunteer web site design can be perfect ways to produce real, “production” code that can be used as part of an interview process. Saying that you can write code carries little weight but producing actual code that you have written does. This lets a potential employer see what you can do firsthand. It means a lot.

Of course programmers and web site designers often need to cut their teeth on some entry level jobs before really being in a position to get into the discipline of their choice. For this purpose getting some experience at home in other areas such as desktop support can be helpful to you as well.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-it-at-home/feed/ 3
Do IT: Employment vs. Contracting in the US (W2 vs. 1099) https://sheepguardingllama.com/2007/04/do-it-employment-vs-contracting-in-the-us-w2-vs-1099/ https://sheepguardingllama.com/2007/04/do-it-employment-vs-contracting-in-the-us-w2-vs-1099/#respond Wed, 04 Apr 2007 20:46:09 +0000 http://www.sheepguardingllama.com/?p=1845 Continue reading "Do IT: Employment vs. Contracting in the US (W2 vs. 1099)"

]]>
IT differs from many professions in the way that employment is handled. This is caused by many factors on both the employer and the employee side. In most careers and certainly in most jobs people work under the “employment” system. Here in the US this can be called W-2 employment because the tax form involved is called a W-2 and contracting is often referred to as 1099 employment because a 1099 tax form is used. (Much of the information in this article is specific to employment within the United States as employment laws vary from country to country.)

There are many differences between these two types of employment. Under full W2 employment workers are protected by regular employment law. Under a 1099 they are considered to be self employed and there are few protections. Under a W2 full taxes are paid, as usual and as expected, by both the employer and the employee. This is the same form of payment that you would receive whether you work at McDonald’s or at IBM. Under a 1099 the employer pays NO taxes and all taxes are the responsibility of the contractor. Generally this is offset by the worker receiving higher wage rates to compensate, but this must be examined carefully with an accountant as companies will often attempt to pay effectively lower rates via 1099 – partially because it is harder to determine what a lower rate is and partially because the people involved are unknowledgeable about tax laws and think only in terms of hourly rates rather than tax ratios and write-offs.
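
As a rough illustration of why a 1099 rate has to be meaningfully higher than a W2 rate just to break even (the percentages below are simplified placeholders – the payroll and self-employment shares are approximate, deductions are ignored, and real planning belongs with an accountant), a quick comparison of hourly take-home looks like this:

def w2_take_home(rate, income_tax=0.25):
    """Hourly take-home on a W2; the employer pays its own half of payroll taxes."""
    return rate * (1 - income_tax - 0.0765)    # employee share of payroll tax, approximate

def c1099_take_home(rate, income_tax=0.25):
    """Hourly take-home on a 1099; the contractor pays both halves themselves."""
    return rate * (1 - income_tax - 0.153)     # full self-employment tax, approximate

# A $55/hr 1099 offer can net less than a $50/hr W2 offer once the extra tax is paid.
print(round(w2_take_home(50), 2))     # about 33.68
print(round(c1099_take_home(55), 2))  # about 32.84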

Both systems have their advantages and disadvantages. W2 employees are entitled to certain benefits but 1099 workers have more flexibility. Most people, by far, prefer W2 status but in IT there are a certain number who are willing to work under a 1099 and a few who prefer it. Most serious IT professionals that I have known personally over the years have worked a mixture of the two, but 1099 work seems to occur most often during the earlier career years when people are more desperate for work to fill out their resumes. With experience of the tax system and a good accountant a 1099 can be a great way to work. But it does require careful financial management and bookkeeping to make it really work well and it lends itself towards more ambitious professionals.

When working through a consulting or contracting firm the IT professional may be paid through either a 1099 or a W2. Be sure to always check before accepting a position – you should always specify your required rate as $x/hr on a W2 or $x/diem on a 1099, etc. Never leave the question open to debate later no matter how obvious it may seem at the time. It isn’t worth it. Often consulting firms will use a 1099 to get paid from the customer but will hire the professional on a regular W-2 creating the confusing “W-2 Contractor” position which is valuable and is just as good as any other W-2 to the professional but makes for difficult terminology.

Check with your accountant but in the US you have traditionally been allowed to do a small amount of work under a 1099 without paying any taxes at all. If you get just one small contract a year in addition to full time “normal” work you may get a nice tax break making that contract extra valuable.

]]>
https://sheepguardingllama.com/2007/04/do-it-employment-vs-contracting-in-the-us-w2-vs-1099/feed/ 0
Do IT: Breaking In – Volunteering https://sheepguardingllama.com/2007/04/do-it-breaking-in-volunteering/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-volunteering/#respond Wed, 04 Apr 2007 19:48:47 +0000 http://www.sheepguardingllama.com/?p=1844 Continue reading "Do IT: Breaking In – Volunteering"

]]>
Much like interning, volunteering can be a great way to get experience in IT, both early on in your career when you are having problems getting those first few jobs and later on when you are well established and want to be able to demonstrate breadth and community involvement, but in this article we will focus on the former. Volunteering is much like interning except you don’t normally have the advantage of having a mentor or a professional peer to provide you with a recommendation. Because of this a real internship is much more valuable to you professionally. But volunteering shouldn’t be overlooked, for two reasons: firstly because everything that you can do to add to your resume is important and secondly because being a good, professional citizen means giving back, and starting early gets you into the swing and makes it easier to keep giving later.

Volunteering can take on a lot of forms from working with local charities, churches, private schools or other non-profits. You might have to hunt around to find organizations open to working with you and some non-profits will simply not be interested in having volunteer help (if you can believe that) but don’t be discouraged because somebody will be very excited to have you helping out.

The type of work that you are likely to do when volunteering could vary wildly. You might be helping out setting up computers much like you would with a basic desktop tech position. Or maybe you will be helping the organization get set up with their first website. Maybe a small, private school could use you to teach a computer basics class – or the same in the evenings at the community center. Maybe a local organization needs help setting up an Access database or needs OpenOffice.org installed on their computers and someone to show them how to use it.

Volunteering is likely to give you a chance to do some mostly basic work but perhaps in new and meaningful ways. By volunteering you are practically assured to be increasing the diversity of your experience portfolio. You may have a chance to learn new aspects of a technology but more likely you will get a chance to apply business rationale to the technology decision making process.

When volunteering quite often the organization that you will be working with, especially when they are quite small and you are the only “computer guy” that they have access to, will be in a position to turn to you to help with IT business decisions. These decisions are likely to be quite small but they can serve as an important tool in your own educational process. Maybe they need help deciding what small firewall product to purchase. This is your chance to carefully investigate firewall products in their price range to determine features, cost, stability, maintainability, integration, etc. Maybe you can help them choose their office suite or maybe you can do some wiring and help them set up an entire office. The opportunities are there and if you do your “job” well they might have more and more need of your services. Often non-profit organizations are limited by the availability of support that they can get. Your voluntary efforts might be a significant factor in their ability to become more technologically advanced.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-volunteering/feed/ 0
Do It: Breaking In – Interning https://sheepguardingllama.com/2007/04/do-it-breaking-in-interning/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-interning/#comments Wed, 04 Apr 2007 19:22:28 +0000 http://www.sheepguardingllama.com/?p=1843 Continue reading "Do It: Breaking In – Interning"

]]>
Often overlooked as a means of entering many industries is the practice of interning. While paid cooperative learning experiences and paid internships may be relatively rare, the more traditional unpaid internship is still widely available. Unpaid internships are hardly glamorous but they do offer a significant means of rapidly entering the IT profession.

One time in an interview early on in my career a technology recruiter told me that six months experience was considered to be equal to or better than a four year degree specific to the field (i.e. an IT or CIS degree and not a CS degree – unless you are doing research, of course.) Now this might be an exaggeration but in what direction? Perhaps real world experience is worth even more than that. Maybe some less but my experience agrees with that assessment. Experience beats certifications, education – anything.

Some people manage to get entry-level jobs without having to intern, get a cert, take a class or whatever. These are the lucky few. I happen to be one of these. I got a very entry level job just two weeks out of high school. I happened to have been in the amazing position of having proved that I likely possessed the skills for the job and was offered a very low-paying position that I hadn’t even applied for or knew existed. I took it and the rest is history. The job paid so little that I might as well have been an intern but it gave me real world programming experience and introduced me to large scale UNIX systems that I had never worked on previously. I did my first networking and worked with lots of hardware that I had never even seen before this job. I did this entry-level position for a year and a half. It made more difference than anything I have ever done before or since to advance my career.

That first job put a stake in the ground and declared, in writing, that I had a “start date” in the industry as well as someone to use as a reference. Even today my “career length” is still determined by that first day working in IT. It is unlikely that anything else that you do in your career, at least for the first several years or decades, will have so much effect as your start date. Everything that you do before getting your first position should be focused on getting that first position. Once you get that entry level job, no matter how mundane (but beyond working at Circuit City,) you will be amassing “experience” that will add to your total from there on out.

One of the great advantages to interning, paid or unpaid, is that because of your incredibly low cost and obvious ambition you have a better chance of being allowed to work with technologies that you might be barred from otherwise due to your lack of experience. And your boss will probably love you because you are costing him or her next to nothing. It isn’t hard to get an incredible return on investment under those conditions.

Interning is not designed to be a means of gaining employment with the company that you are interning with but, obviously, that is a possibility. Do a great job as an intern and the company is very unlikely to turn around and give the next job that you can handle to an unknown entity when they have someone that they know right there. But this is not always the case.

Interning is not meant to last forever. Six to nine months is usually enough. A year isn’t out of the question. Interning is my personal recommendation for anyone who is starting their career during their normal “college years” and has the advantage of living at home with the folks. If you are older it might not be something that you can reasonably do. If you are really motivated or lucky you can often get into a good internship during your high school years. This is the optimum solution. You can often walk out of high school and right into the field. Or even get work before then. It is rare but it happens.

If you intern for too long the benefits will start to go away. You can only work for free for so long before it becomes a problem. I suggest looking for a paying gig starting somewhere in the six to nine month range of your internship. It may take a while for the right position to open up. Interning is perfect because the company that you are at can’t complain about you heading off to interviews.

While interning you shouldn’t be kicking back and taking it easy figuring that you are earning it because you are not getting paid. What you should be doing during this time is working on certifications. Even if you just get one or two during this time it shows a lot more ambition than just interning alone and it provides more material for your resume which is critical at this early stage.

As an intern you should act as much as possible like a professional employee. This is your chance to learn how to be a professional without the pressure. Take advantage of it. Do the best work that you can do. Show up early and work late. Work hard, do your best, take time at home to study the technologies that they are using in the office and be persistent in asking to be allowed to work on more and more advanced projects once you have proven yourself on more menial tasks.

Chances are if you are able to seriously consider interning you are either one of those amazing people who doesn’t need to sleep or else you are young and living with family and have few or no bills that you have to take care of yourself. If you or your supporter(s) argue that interning is a waste of time, that no one should work for free and that college or university is a better use of your time and money then consider this:

Interning can begin during high school or, if not, as early as being immediately out of high school. This gives ambitious interns months or potentially years of a lead on their college-bound peers and their lead puts the proverbial stake in the ground showing the beginning of their careers. College does not do this.

A four year college student who waits until after graduation to pick up their first IT job could be five or six years behind their peer who left high school to take an unpaid internship. That former intern could potentially be well situated in a lower mid-career position before the college student starts looking for their chance to “break in.” That lead is very tough if not impossible to overcome.

Additionally the college student probably has debt. A lot of it. Racked up from years of not working and spending like crazy. Most colleges are very expensive and most require that you spend a lot of money on dorm rooms and activity fees. Not only is the intern far ahead on debt load, he or she has probably had positive cash flow for almost the entire time that the college student was in negative cash flow.

Now the obvious retort is that the college student has some level of education that is so valuable that he or she will instantly be able to do more tasks and advance their career faster than the former intern. Perhaps. I will talk about that issue in another article. But assuming that the educational advantage is real, let’s look at the equation again.

The college-first person has a four year university degree and finds a decent, entry-level job right out of college. Life is good. Degree under their belt and the first job underway. The former intern has four or five years of experience under their belt and no degree at all. We will assume that both of these potential professionals have an equal number of certifications and that other factors are generally comparable. At this stage the former intern has the massive career and financial advantage. It will likely take the college graduate, with the same skill and drive, two to five years to approach the intern’s career potential at this point. That is a long time.

As we move into the future, let’s say another five years, we see the college student now has five years of industry experience and is mid-career. During those five years the former intern, being a mid-career professional, was given the benefit of going to university as part of their pay package. Educational benefits have a tax advantage for both parties and many companies will pay for some or all of a college education and often other types of education as well. So after ten years the college-first professional has five years of experience, a college degree that is very out of date and a large debt load to show for it. The intern has ten years of experience, a more recent degree and no collegiate debt.
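
To put deliberately made-up numbers behind that comparison (every figure below is a hypothetical assumption chosen only to illustrate the shape of the argument, not data from this article), a back-of-envelope sketch of the two ten-year paths might look like this:

# Hypothetical ten-year cash flows: the intern earns modestly from year one,
# the graduate spends four tuition years in the red and then starts higher.
intern   = [15_000, 30_000, 38_000, 45_000, 52_000, 58_000, 63_000, 68_000, 72_000, 76_000]
graduate = [-20_000, -20_000, -20_000, -20_000, 45_000, 52_000, 58_000, 63_000, 68_000, 72_000]

print(sum(intern))     # 517000 earned over ten years
print(sum(graduate))   # 278000 once four years of tuition are subtracted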

The bottom line is that college is a huge risk. It is a gamble. In many industries college level work is required to gain entrance but in IT it is more likely to be a barrier. The risks associated with foregoing real experience to spend time in college are very high. College isn’t the “safe” route that it is with other industries. Often IT professionals are more likely to look at “dedicated” time spent in college as party time as so many professionals did their degrees while working. IT is not other fields and people going into IT should think carefully about how taking the “safe” route will affect them in the long run. A four year degree is very likely to be enough to get you an entry level position but for an ambitious, career-minded IT professional it can be a stumbling block that can have ramifications that will stay with you for the rest of your life.

Additionally, the intern had a safety net, a fall back, all along. If, at any time, they were unable to find work, whether due to a contract ending or being laid off or whatever, they could simply enter college at that point. Take classes until another position came along and then switch back to working, using college as something to fill the gaps. Or they could do college at a slower pace through evening or distance classes, which are generally geared more towards motivated professionals than towards full time non-professional college students.

One of the big mistakes that people often make when considering a career in IT, or when soliciting advice about moving into the career, is to look at IT as if it were any other professional domain. But IT is very unique. It is larger than other fields and has more of an employment gap. It is a constantly changing field where a college student going to school for four years is likely to find the knowledge learned in their freshman year nearly useless by the time that they graduate. This doesn’t happen to engineers. This doesn’t happen to teachers. This doesn’t happen to chemists, to pharmacists or lawyers. All of those fields change but IT changes at a pace that other industries cannot even imagine and it is likely to stay that way. IT is broader than other disciplines. IT is different. Accept it. Embrace it. It is what makes IT so great, but don’t be fooled, because what worked for your cousin to get that job in insurance is not going to get you into IT.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-interning/feed/ 1
Do IT: Breaking In – Friends and Family https://sheepguardingllama.com/2007/04/do-it-breaking-in-friends-and-family/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-friends-and-family/#respond Tue, 03 Apr 2007 20:53:36 +0000 http://www.sheepguardingllama.com/?p=1842 Continue reading "Do IT: Breaking In – Friends and Family"

]]>
Nothing makes you better at doing something than actually doing it.  Hence the expression “practice makes perfect.”  Hands on, real world experience is the best teacher.  Getting that experience isn’t always so easy but in IT we have more opportunity to get experience early than in almost any other field.

Almost everyone uses computers today including members of your family and your friends.  Chances are that many of them will need help with their computers from time to time and this represents an opportunity for you to demonstrate your abilities with computers and customer service.

Dealing with family computer problems will not exactly prepare you for the issues that you are going to face in the business realm but it will give you a pretty good idea of what it is like dealing with “users” and some of the problems that they usually face.  Business users will have different problems including complex networking issues involving interfacing with Active Directory and storage servers.  Home users are more likely to encounter complex hardware issues involving non-standard peripherals that they want to attach to their computers and are far more likely to be dealing with viruses and malware.

Helping home users keep their computers humming along, doing operating system reinstallations, setting up computers, teaching family members how to use their applications, etc. will help you be prepared for real world problems.  Even though helping friends and family may not be the optimum learning situation it is an opportunity that you will almost always have at your disposal and it should not be overlooked.  Working with home users will also broaden the range of hardware and software configurations that you will have run across.  But keep in mind that the software and hardware that you encounter in homes is seldom the same as what you will find in business.

Home users are increasingly using network devices such as routers and firewalls, switches, wireless, print servers, media servers and even network attached storage.  Networking home computers, setting up anti-virus packages, doing tuning and more can contribute to a well-rounded desktop support education.  Be sure to focus on security as well – home users need it too.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-friends-and-family/feed/ 0
Do IT: Breaking In – Certifications https://sheepguardingllama.com/2007/04/do-it-breaking-in-certifications/ https://sheepguardingllama.com/2007/04/do-it-breaking-in-certifications/#respond Tue, 03 Apr 2007 18:23:27 +0000 http://www.sheepguardingllama.com/?p=1840 Continue reading "Do IT: Breaking In – Certifications"

]]>
Information Technology as a field offers a number of different paths that can be used to gain entrance into the field for beginners. In the late 1990s and early 2000s the most prevalent and popular path was through the use of industry certifications. Since the early 2000s the popularity of certifications has been decreasing as the tests are generally becoming easier and systems for “gaming” the test and even outright cheating have become common.

This is not to imply that certifications do not have their place. They still show initiative, and other systems of demonstrating competence can also be gamed or faked, so certifications retain real value. Over time industry certifications are likely to find a reasonable middle ground of usefulness without the unnecessary hype of 1999.

Certifications have the benefit of being able to cover very specific ground and can have value that few other resources can offer. Certifications range from simpler, single test certifications designed to show knowledge of a single technology to large, in depth, multi-exam monsters designed to show knowledge of a specific family of technologies at a level unheard of in even the most demanding collegiate circles. Generally certifications are most valuable in general technology areas for people early in their careers to use as “foot in the door” tools, or later on for mid-career professionals to demonstrate in-depth knowledge of a specific skill that may be difficult to represent in any other way.

In this article we are only looking at using certifications as a means of breaking into the IT industry. Getting that first job can be difficult. Often, once the ball is rolling, finding more IT work is easy. Each subsequent position is easier to find than the last. But the first one or two can be very difficult indeed and every tool at your disposal should be used. Certifications are one of the best tools. In fact, I would be very reluctant to hire anyone without experience who has not taken the time and effort to get at least one or two certifications under their belt.

CompTIA A+: The most common “beginning” certification known widely in the industry is the A+ offered by CompTIA. The A+ is a longstanding cert and is designed to test the knowledge of a desktop technician supposedly at the level that should be obtained after the first six months of experience. In reality few companies would want to hire someone without the level of knowledge tested for in this exam. The biggest difficulty with the CompTIA A+ (and continuing on with later CompTIA certifications) is that the test is generally horribly out of date, based on a set of technologies that only applies to Windows desktop support, and often the questions are outright incorrect. People studying for the A+ must study from actual A+ materials as they will be stuck memorizing many CompTIA specific facts that must be forgotten as soon as the test is completed because they are either wrong, useless or irrelevant.

As poor as the A+ is, it has become the de facto standard certification for entering the industry. The theoretical purpose of the test, to examine basic desktop class hardware and software skills, is good, and anyone working in the industry or even near the industry should have a good grasp of these everyday skills – even programmers and managers. But since the test is based on so much archaic knowledge and non-commercial grade systems it does not actually test the knowledge base that it purports to. Often the material on the test is so old that no one within the first three or four years of their career, even in the largest IT shops, would ever have had even the remotest access to some of the ancient systems that the test is based on. In Information Technology there is no room for people, and certainly not tests, that cannot keep up. But most of this knowledge can be memorized easily and once you are through the A+ test you can move on to bigger, better and more useful things.

Popular certifications following the A+ (it is almost always advisable to focus on getting the A+ over and out of the way as early as possible) include the CompTIA Network+, the CompTIA Server+ and the Microsoft desktop exam of the day. We will look at each of these certifications in turn.

CompTIA Network+: The Network+ is designed to be based on the expected knowledge of a technician with two years of industry experience. The exam is based solely on computer communications and networking. It is a broad and general test and, in my opinion, it is the most valuable test that CompTIA offers. The knowledge that is tested on the Network+ is knowledge that is useful to people in any IT field and I would love to see everyone taking this exam.

Unlike the A+, which is full of outdated and worthless knowledge, my experience with the Network+ is that the subject material is much better thought out and mostly relevant to the real world. In the process of studying for the Network+ it would be advisable to spend a good amount of time becoming very familiar with the subject matter as it will be useful again and again throughout your IT career. Often the Network+ is a “growth” certification and not a “foot in the door” cert but it can work wonders for someone trying to get off the ground who hasn’t found that first real position yet.

CompTIA Server+: The Server+ is not an exam for everyone. Programmers, analysts and others may find the subject matter almost completely outside of their discipline and not useful to them. But for anyone looking toward a career in the hardware areas or systems administration the Server+ can be quite useful.

The Server+ is designed to be roughly of the same “level” as the Network+ and picks up where the A+ hardware section leaves off. Instead of focusing on desktops and laptops the Server+, as its name suggests, spends its time looking at server class hardware, tackling storage issues, redundancy and rack mounting among other topics. The Server+ also touches, just slightly, on server operating systems as a server technician will need, from time to time, to be able to access the systems themselves and not just the hardware that they run on.

Microsoft Desktop Support Exams: Microsoft offers a new professional certification exam with every major operating system release. At the time of this writing Microsoft offers certifications for Windows 2000 Professional and Windows XP Professional – Vista certification is expected to be available very soon. In fact, they offer a second, more advanced Windows XP exam for people who are interested in going further down that path. Since almost all desktop support personnel are involved in supporting primarily if not exclusively Microsoft enterprise desktops this certification can be a real differentiator between candidates.

The Microsoft exams are very closely focused on the knowledge and skills that are needed for serious desktop support professionals to do their jobs efficiently. The Microsoft exams are extremely well written and are clearly peer reviewed extensively. Microsoft takes their certification process very seriously and their exams reflect this. It is a pleasure taking a Microsoft exam. In all of my exam taking experience, while other exams, notably CompTIA’s, are loaded with poorly worded questions that have no actually correct answer, the Microsoft exams have been flawless, with every question, regardless of how difficult it was, clearly having a correct answer even when I did not know what it was. You never get the impression that you know more about the product than the test writers do when taking a Microsoft exam.

The Microsoft desktop support exams cover a lot of knowledge areas and are fairly challenging. But they are very valuable and can do wonders for the ol’ resume. Once you have the basics out of the way having a good, solid Microsoft exam or two under your belt can be just what you need to get into that first position or to advance on to your second.

Current Microsoft exams targeting the desktop include:

Windows Vista and 2007 Microsoft Office System Desktops, Deploying and Maintaining
Windows Vista Configuration
Installing, Configuring, and Administering Microsoft Windows XP Professional
Installing, Configuring, and Administering Microsoft Windows 2000 Professional

Of course, newer exams are more useful than older exams. By the time that you spend a few months preparing for an exam the focus will increasingly shift towards newer technologies so even if Windows XP offers the greatest installation base and demand in business when your studies begin Vista is much more likely to be valuable to you near the start of your career and will be increasingly so until another operating system replaces it.

Microsoft exams of this nature also have the very nice advantage of being part of the learning path towards larger and more difficult composite certifications from Microsoft such as the MCSA, MCSE and MCDST.

The Microsoft Certified Desktop Support Technician, or MCDST, was a two test composite certification (two Windows XP stand alone exams) that demonstrated a real commitment to Windows XP support for the desktop. By taking each of the underlying exams you would gain a standalone Microsoft Certified Professional certification to put on your resume and with the completion of the second you would also achieve your MCDST status. Three resume “lines” for the price of two. A great value indeed.

With Windows Vista the certification structure has changed and the MCDST has been replaced with the MCITP, or Microsoft Certified IT Professional: Enterprise Support Technician. The new structure is unfortunately very confusing. It makes it much more difficult for aspiring IT professionals to know definitively which certification paths will be most valuable to them. But it does allow for greater levels of differentiation if a company takes the time to learn the meanings of the myriad certifications. The new MCITP still requires two exams but they are different exams than previously required and Microsoft’s current web site should be consulted.

]]>
https://sheepguardingllama.com/2007/04/do-it-breaking-in-certifications/feed/ 0
Do IT: Information Technology Career Paths https://sheepguardingllama.com/2007/04/do-it-information-technology-career-paths/ https://sheepguardingllama.com/2007/04/do-it-information-technology-career-paths/#respond Mon, 02 Apr 2007 22:22:43 +0000 http://www.sheepguardingllama.com/?p=1838 Continue reading "Do IT: Information Technology Career Paths"

]]>
Information Technology is a very large field but within the field there are common job categories and career paths. Over time new careers appear and a few old careers fall away and within broad career paths there are many areas of specialization. This article’s focus is to look at the large, broad categories to give new IT professionals or IT hopefuls a basic grasp of options within the field.

The categories here are separated by duty and represent the basic building blocks of the IT professions and disciplines. By no means is this meant to be representative of all job roles and career paths available to an aspiring IT professional or hobbyist; rather it is meant to provide some structure to make target careers less ephemeral. In the real world few, if any, IT professionals do the work of only a single job role without venturing outside its strict boundaries, if such boundaries can even be argued to exist. Extremely large IT departments (those with over 10,000 IT professionals) will often stick very strictly to descriptions such as these but small departments (say of only 10 IT professionals) may lump almost all skills into just two or three overarching job descriptions.

Programming: Of all professional areas within Information Technology the area of programming is surely the most well known to people outside of the profession. Programmers can work in numerous different technology areas, specialize in many different ways and can work with many different languages and platforms. Programming is often the area of IT that draws people into the field. Programming, more than any other IT discipline, is easy for people to begin learning early and is very accessible.

Programming, or coding, involves the writing of computer programs, which can range quite significantly, and job titles vary dramatically as the job descriptions begin to differ. Beginning professionals on the programming path are often just termed “programmers” and can expect to do programming projects that involve tiny pieces of larger systems. Programmers are almost always working on teams of programmers but can potentially live very solitary existences if such is desired. Programming professions allow for a very wide array of working environments. Programmers are better suited than most to working flexible hours and from remote locations or “working from home” due to the nature of many programming projects.

As programmers progress along their career paths they can move up to positions like software developer, software engineer and software architect. Specialization within the programming realm can include system programming (working close to the hardware – highly technical), user interface programming (working closer to the end user experience – generally less technical and more creative and involved in the “human element”), database programmer, web application programmer, etc. Programming fields lend themselves to crossing into software design and management roles as well.

Systems Analysis and Design: Programmers may write software but systems analysts design it. Often the two roles are combined in what is called a “programmer analyst” as the roles are so closely identified. A systems analyst’s role is to define the requirements and high level design for an application or program. Programmers are responsible for the low level design. A good analyst will have a very good understanding of programming, developers’ tools, architecture and more. It is a broad discipline that often involves a lot of customer or client interaction and the ability to translate requirements from clients outside of the IT field into useful requirements for design and for the programmers.

Systems Analysis is almost a management discipline and analysts will often cross that boundary many times during their careers. It is an exceedingly creative part of the IT field requiring a lot of critical thinking and “outside the box” contemplation skills.

Project Management: Any area of IT can have project management involved with it, but this almost always applies to software project management or systems project management. IT departments that include any number of programmers and possibly analysts will logically be charged with developing software, and project managers oversee this process. Technical project management is generally closer to being a management discipline than an IT discipline, but many PMs are highly technical and come from the core IT ranks, as IT project management is so varied and different from project management in other areas such as engineering.

Hardware Support: Hardware support comes in two basic flavours – desktop hardware (which includes laptops and other commodity end user items) and server or datacenter support. Hardware techs range from consumer desktop support personnel that you will encounter at stores like CompUSA and BestBuy to server technicians working in datacenters with multi-million dollar hardware; the range is rather broad. Because desktop hardware has become so commonly known, the "computer store" techs are generally not considered to be IT professionals any more than a car salesman would be considered a mechanic or a car designer. Sometimes a store tech job can provide leverage into the field but generally this is not the case. The technologies used in consumer PCs are different enough from enterprise business systems that the skills are generally not useful across the divide.

Some large companies maintain a staff of hardware technicians who work on desktop and laptop level hardware. Desktop class technicians are so identified with the CompTIA A+ certification that oftentimes these job roles are simply termed "A+ Techs". This is generally a path towards the server technician positions. Server technicians need to be familiar with much more complicated and varied hardware and often work in large datacenters where there is little or no direct customer interaction. While desktop techs often interact to some degree with end users and desktop support technicians, server technicians generally interface only with systems administrators.

Networking: One of the core skill areas in IT is networking and communications. Networking is a relatively new discipline within the industry, as computers used to exist primarily as stand-alone devices whether in homes or in businesses. But over the last few decades the idea of computers that are not part of a larger network has gone from commonplace to practically unthinkable. Today even the most basic home computer is purchased to be an Internet connection node and not for the innate capabilities of the computer itself. Because of this, networking has exploded into a very large, core discipline needing many qualified professionals to fill out its ranks. Networking jobs generally fall into a few basic categories: network technician, network administrator and network engineer.

As you can guess from the job role names, a technician's role is mostly to deal with "field" networking issues, which often involves a lot of leg work, is more likely than other positions to place you in a remote office and often means working with smaller categories of networking equipment, but it is a stepping stone to higher level networking positions. The network administrator is the position responsible for managing and running the day to day operations of the corporate network; generally the network administrator is the last word in the company's network operations. This can be a very senior position, and while the job titles are few, the discipline's long term career growth is solid. A network engineer's job is to design networks. Often administrators and engineers are the same people, but in large companies these roles are separated, with engineers generally having a broader knowledge of network solutions and vendors and administrators having a more thorough knowledge of low level tuning and configuration of the equipment in use at the time.

Systems: Possibly the largest of all IT disciplines is that of systems. The concept of systems is so large that it is difficult to define in any meaningful way and is often broken into several sub-disciplines to make it easier to quantify. The basics are that the "systems discipline" involves any basic management of computer "systems". This can mean management of end-user resources like desktops, laptops, PDAs, etc. as well as shared resources like servers. Generally a "systems" professional will work primarily with the computer's operating system, but this qualification is hazy at best. Any real-world systems professional will have much overlap with other areas, but the core functionalities are generally better defined.

Desktop and Deskside Support: The most common sub-discipline within systems is desktop support. This role is very difficult to separate from that of "Helpdesk", although the latter is less a distinct discipline than a delivery method of support. Most businesses separate the helpdesk out into a unified function that crosses many discipline boundaries.

Desktop Support involves the direct management of personal computers, whether they are Windows, Mac, Linux, etc. A desktop technician will often work either directly with an end user's workstation or remotely via remote control technologies to keep workstation resources working correctly, add new software, etc. Desktop administration often deals with large numbers of desktop resources and generally handles password and account issues, large scale desktop changes, migrations, etc.

While desktop job descriptions are generally rather lean, the field is actually extremely large. Almost every business requires a regular host of support personnel to keep the non-IT staff working on a day to day basis and will additionally utilize contract desktop support staff to augment internal resources, as "project" work often requires far more people than a company can normally keep on staff. Many IT professionals who intend to enter almost any of the other disciplines will start their careers in the desktop support realm, as it has the lowest "barrier to entry" into the field. But don't be fooled: just because it is easy to get into the beginning desktop support ranks does not mean the field lacks long term career opportunities. Many professionals have long and rewarding careers without ever leaving the desktop support arena.

In some large organizations there might even be a dedicated desktop engineering role specifically for the function of designing the operating system and application profile for corporate desktops. This position is almost always folded into other job roles but can, potentially, exist on its own.

Server Administration and Engineering: The most visible and well known career path under the systems umbrella is that of server administration and its natural sibling, server engineering. These roles are so common that almost all businesses simply refer to them as system administration and system engineering.

Server support roles involve designing, building (from a software perspective), securing, deploying and managing the server resources of an organization. These servers come in a wide variety of types, from Windows, Linux and Netware operating systems to application, database, web and email functions. Server support is a very large job role category that often spans entire careers from intern to retirement. This is one of the largest senior level career categories and is often a "target" career for people entering the IT field.

Pure server support roles as can be found in very large companies may be very strictly limited to supporting just the operating system and core functionality of a server. More often server system administrators will be involved in the running of extended functionality such as email, web, database and other software that is tied to the server.

Application Support: In large organizations, when the system support role is strictly limited to the server's operating system, you will find dedicated application support personnel who generally specialize in a single application (such as Microsoft Exchange), in a category of applications (such as email) or in a suite of applications (such as Microsoft BackOffice, including applications like Exchange, SharePoint, Live Communications Server, Project Server, etc.) More often you will find mixed server and application specialists who specialize in a particular platform and application combination, such as iPlanet on Solaris or Apache on Linux. Management Information Systems applications such as Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM) are common dedicated application areas as well.

Many companies have myriad internal applications that have been developed or customized either in house or through a consulting agreement and are considered to be a competitive advantage for the organization. These unique applications often require support as would any commercially purchased off the shelf application. In addition to the obvious role of application administration the role of application support is also common in large corporate entities. This is often called “operations” as this role functions almost as an organizational nerve center.

Database Administration: Known as a DBA, a database administrator is a special category of application administrator that is dedicated to the database technologies. Databases (such as Microsoft SQL Server, Oracle, Sybase, MySQL, PostgreSQL, etc.) are such a critical, popular, important and unique application that the field is considered to be its own area. There are many skills unique to the DBA profession that are not used or not widely used outside of database administration.

Database Designer: Somewhat related to both systems analysis and programming is the role of database designer. A database designer's job is to work with application designers and analysts to design the database portion of an application. Databases are extremely complex types of software that generally require careful management and tuning, and individual databases require detailed design, which can be a significant portion of the design of an application.

Web Designer: Unlike the web application developer, which is a popular programming job role, a web designer fulfills the very popular function of designing web pages themselves. This is often considered a fringe IT job role because it is equally related to publishing, marketing and other, non-IT disciplines, but because a truly qualified web designer needs to be very skilled technically it is, in my opinion, a true IT discipline. Web designers probably get more opportunity for artistic creativity than any other IT role. Often web designers will slowly move into programming to enhance their skill sets and will begin to become user-interface-specialized web application developers. But the leap from non-programming web design to web application development is a large one not to be undertaken casually. It is truly a change of discipline, though between two disciplines that are closely tied together. Web design is by far the most prominent IT discipline to make use of traditional artistic abilities.

Security: While almost every job role needs to make security a part of its own discipline, the enterprise has a place for overarching security personnel as well. IT, because of its ties to the company's most valuable non-people asset – data – is integrally tied to security. The role permeates the field and is broad in its implications. Security professionals must be aware of everything from physical security to system, network and database security to secure programming methods. In today's IT professional realm security has become an extremely hot topic and it is very likely to remain so indefinitely.

Help Desk: This task is often placed in its own category because of the nature of the position. Help desk generally refers to the job role of the technical support call center. Help desk roles generally range from customer application support to remote desktop support. A help desk and an operations center will often be paired together or combined into one entity. Using current remote desktop management technologies such as RDP, the modern helpdesk has taken on many of the job functions previously covered by deskside support. As networks become more stable and more powerful, and as desktop management becomes more ubiquitous and far reaching, the ability of the help desk to cover most day to day support functions increases. Often the helpdesk is used as an aggregation resource to provide a single point of contact for any needs originating from a non-IT end user.

LAN Administration: This mostly deprecated term was once popular for referring to the small and medium business combined job role of deskside and server administrator along with network technician. LAN Administrators were often required to be "jacks of all trades", functioning as a single point of resolution for all "computer" problems in small businesses. Often this meant constant trips to users' desks and wrangling with tiny closets mixing user equipment, servers and network gear. As IT advances this role is becoming less popular, but it is likely to continue in smaller companies for some time. The term LAN refers to the "Local Area Network" and was meant to suggest that the administrator was responsible for all machines connected to, and including, the office's network. LAN Administrators also tend to appear at remote, branch office locations where a single person can satisfy almost all local IT needs and additional needs can be handled via helpdesk or remote administration.

Storage Administration: One of the newest professional areas now widely available as a specialty within IT is that of storage management. Over the last several years new and highly specific storage technologies have emerged and have become a mainstay in the corporate technology environment. These technologies are, to some degree, unique to storage, dealing with large and fast storage hardware as well as network technologies adapted for dedicated use in the storage space. Storage work is generally a blend of systems, networking and a little server level hardware support. This is a young and growing area within IT but definitely here to stay.

]]>
https://sheepguardingllama.com/2007/04/do-it-information-technology-career-paths/feed/ 0
Do IT: Introduction to the Information Technology Industry https://sheepguardingllama.com/2007/04/do-it-introduction-to-the-information-technology-industry/ https://sheepguardingllama.com/2007/04/do-it-introduction-to-the-information-technology-industry/#respond Mon, 02 Apr 2007 18:57:13 +0000 http://www.sheepguardingllama.com/?p=1837 Continue reading "Do IT: Introduction to the Information Technology Industry"

]]>
Information Technology, IT, is one of the most dynamic, varied and interesting career choices available today. IT is about much more than simply “working with computers”. IT is a career in “change management.” Everything about the IT industry involves constant change making every day hold the potential for something new and exciting. The industry is young – constantly evolving and reinventing itself.

IT offers a variety of career options: "hands on" paths including deskside support and server technician, customer service related tasks such as operations and help desk, technical support roles like network and systems administration, engineering positions like network and systems engineering, software development roles from web applications to systems programming, creative roles like web design, and more. IT is generally highly technical while being infused with opportunities for social interaction and a high degree of creativity and critical thinking.

The incredible variety that exists in the IT field creates possibilities for working on a wide range of tasks that allow for career growth and advancement while avoiding boredom and stagnation. IT is an ideal career path for people who want to constantly strive to better themselves and are highly self motivated. IT is an extremely large field unto itself and is involved with all other industries, which creates unique, blended, industry specific IT career paths in addition to the pure IT disciplines, widening the field even further. Popular blended IT career paths include hospital and medical care, IT management, financial and banking, security, government, sales and marketing, engineering and CAD, etc.

Information Technology is exciting and diverse. It is a growing field offering new jobs year after year. Currently there is a global need for qualified IT professionals, and in the United States there is a significant shortage of candidates. IT offers opportunities along both technical and management paths and many long term career goals. IT provides variety and endless challenge. IT, through its natural state of constant change, makes for continuing excitement.

More articles on the IT industry coming soon.

]]>
https://sheepguardingllama.com/2007/04/do-it-introduction-to-the-information-technology-industry/feed/ 0
Women in IT: Barrier to Entry https://sheepguardingllama.com/2007/03/women-in-it-barrier-to-entry/ https://sheepguardingllama.com/2007/03/women-in-it-barrier-to-entry/#respond Tue, 27 Mar 2007 22:23:22 +0000 http://www.sheepguardingllama.com/?p=1826 Continue reading "Women in IT: Barrier to Entry"

]]>
I was reading an article in InfoWorld today talking about the low numbers of women in IT. Anyone who has worked in the field, and most who have not, knows that the IT industry is practically devoid of women. In fact I was surprised that women make up almost 25% of the field; in my experience the number is dramatically lower. I wonder if, to make the numbers seem so high, they counted only a subset of the field or perhaps included some pretty far reaching support personnel. My personal experience across many vertical industries and in companies of many sizes and geographic locations is that women represent no more than 10-15% of the field. Recently I have begun to see this number rising, but only through my increased interaction with IT professionals in Europe.

Articles abound discussing why women are not being encouraged to enter IT or why so many women are now exiting the field, but I want to discuss a particular, very often overlooked area in which, I believe, women are being hampered from entering deeply into the IT workforce – the physical asset management career phase.

Almost any IT professional – especially one who takes the fast track, wishes to start working in IT from a young age and looks to get experience during high school, instead of college or coinciding with college – is often tasked early on with working in an extremely physical environment. Whether you are talking about that first job placing monitors on desks and crawling on the floor to plug in desktops, or about racking and stacking servers in the datacenter, the first several years for the average IT professional entering the field are likely to be very physical. The fact is that the equipment involved in the IT industry is, on average, quite heavy, that most jobs remain closely tied to the hardware and that going through the hardware management stage is critical to most IT job paths. There is a reason why the CompTIA A+ exam is expected of almost any IT professional in her first several years of employment.

Working with the physical hardware has a lot of advantages. Knowing intimately how a server goes together, what types of racks use what hardware or how many hard drives fit into a chassis can be important even when reaching into the higher IT ranks. Of course this knowledge can be gained through study instead of first hand experience, but that is much more difficult and the results are not the same. In an interview I can state that I have first hand working knowledge of myriad hardware platforms. Even now, with over a dozen years of experience in the field, it still comes into play in almost any interview or discussion. The ability to lift a Compaq Proliant 6500 or a 2200VAC UPS unit was a major factor in my getting work at one point. It allowed me to do tasks without assistance and to take jobs that may not have been available to someone with less lifting power.

I once worked a desktop support job that involved moving eighty-five twenty-one inch Sony CRT monitors along with their desktop counterparts. They had to be moved from the back of a tractor trailer, brought into an office building and placed on desks all over the office. They had to be unboxed and hooked up. The crew spent an entire evening just doing heavy lifting. It wasn't the part of the job that we were getting paid for, but the company didn't want to hire a separate moving crew just to move some computers so they paid us to do it. Even a crew of almost all early-twenty-something men was completely spent by the end of the evening. It was a grueling task and the job barely allowed enough time to get home, sleep and return before more work had to be done. Work like this can be instrumental in getting one's foot in the door of the industry.

Today desktops are becoming smaller, and the switch from CRT monitors to LCD has helped reduce the size and weight of desktop equipment immensely. More computer users have chosen laptops, which makes the job easier yet. But currently these weight reductions only affect the PC support jobs, which generally come at the beginning of most IT professionals' careers. These are gateway positions – important in teaching scope and breadth to up and coming IT workers but seldom a target or stopping point on the career path. It is not uncommon for these jobs to become dead-end jobs for those unable to make the next logical stop – the datacenter.

In the datacenter the equipment that is dealt with every day is very heavy and cumbersome. Equipment ranges from back-breaking 4U rack mount servers to fork-lift only cabinets. Heavy floor panels with razor sharp edges are routinely moved to gain access to under-floor cables. In large datacenters servers may be racked and unracked daily. Heavy lifting is, and will continue to be, a core function of datacenter work for some time to come.

Many women are not capable of physical datacenter work, and far fewer would want to do it whether or not they were able. Very few men look forward to racking servers – it just isn't pleasant. But the server technician step can be a critical step on the IT ladder. It gives desktop support personnel a direct link between desktop support and system administration. For people looking for something similar to desktop support but more technical and challenging it can make a more attractive career target. It gives IT professionals hands on training in the equipment that they will be making decisions about later and a clearer understanding of the limitations and capabilities of the machinery. As humans we learn best by doing, and leaving a piece of the chain a mystery makes it seem more difficult and complicated than it really is.

Almost everyone that I know in IT has either spent time working in a datacenter or intends to do so at some point. Only career programmers tend to avoid this step and generally only those who spend long years in college to get around it. Many programmers go through the server tech stage as a means of fast-tracking their careers and broadening their horizons.

I don't have a useful solution for the industry. Right now we are affected by a multitude of problems that seem superficial on the surface but may be having a dramatic impact on the industry's ability to attract and retain a female workforce. As time moves on, desktops will continue to be reduced in weight and the Deskside Support roles will become less physically demanding. Eventually the datacenter's mainstay equipment will weigh less than eighty pounds, and when it does many more people will be in a position to work in that environment. But for now we are challenged by an industry so broad and so complicated that senior IT managers, system architects, engineers, administrators, etc. are all expected to have paid their dues at some point in a demanding physical equipment environment. This is not to say that there are no means of reaching the higher echelons of the industry without having worked in the datacenter. Not at all. But the reality remains that there are vastly more opportunities for entry into the field, and for the early rungs on the ladder, for people capable of and willing to take on unpleasant and physically challenging positions.

]]>
https://sheepguardingllama.com/2007/03/women-in-it-barrier-to-entry/feed/ 0
And the winner is…. BluRay https://sheepguardingllama.com/2007/03/and-the-winner-is-bluray/ https://sheepguardingllama.com/2007/03/and-the-winner-is-bluray/#comments Tue, 27 Mar 2007 18:46:55 +0000 http://www.sheepguardingllama.com/?p=1825 Continue reading "And the winner is…. BluRay"

]]>
People like to complain about the format war and how it will negatively impact everyone as they search for the "next generation" video format. Well, I have two things to tell you.

One: BluRay started in the lead and was the only format with the headroom to handle current video technology, let alone future video technology. BD was such a no-brainer that it was hard to believe that anyone was seriously considering HD-DVD. Even BluRay isn't a very impressive format for what we need today, but HD-DVD completely misses the mark. The public has moved past HD-DVD without blinking. Even BD itself has only a short time to live.

Two: The age of physical media for content delivery is all but over. Sure, with the advent of 1080p video and lossless eight channel audio, downloading content is suddenly too much for the average consumer, but this is just a temporary swing. When DVD first released, the thought of downloading a whole DVD from the Internet was the stuff of science fiction. Sure, we could hypothesize about it and we knew that someday it would happen, but actually doing it seemed a long way off. By halfway through the DVD lifespan, Internet connection speeds and backbone capacity had increased so dramatically that consumers in most markets could download an entire DVD image or video files of similar quality in minutes. Minutes! My cheap, bottom of the line Internet connection in Newark will grab a DVD in about eight minutes. Imagine what people in markets with really high speed connections can do! Already there are a lot of services allowing you to download "rental" movies and some places allowing you to download movie purchases. As our speeds continue to increase and as people connect more and more devices to their televisions that can surf the Internet and play movies, we will see people using physical media less and less at a fantastic rate. The idea of instant gratification is too much for most people to resist. The fact that a download is cheaper will play second fiddle to the convenience factor.

Add to this important facts like BluRay just hitting the critical 100,000 unit mark two months ahead of the pace of traditional DVD, which itself was one of the fastest adopted new technologies ever. BluRay is set for rapid market domination. And consider that DVD was essentially unrivaled while BluRay has had the stigma of HD-DVD to contend with. And now Microsoft has released their new XBOX 360 with HDMI (the killer feature of PlayStation 3 until now) but decided to forgo including the HD-DVD drive, as the market has shown little to no interest in it. If Microsoft isn't going to promote their own format, who will? HD-DVD is dead. BluRay won. Game over.

Now if only BluRay can hold out against the Internet for any length of time. I predict rapid proliferation of simple file formats that are carrier agnostic and transparent, and an almost instantaneous switch from physical media to the online media world.

]]>
https://sheepguardingllama.com/2007/03/and-the-winner-is-bluray/feed/ 1
Technical Web Design with XHTML and CSS https://sheepguardingllama.com/2007/03/technical-web-design-with-xhtml-and-css/ https://sheepguardingllama.com/2007/03/technical-web-design-with-xhtml-and-css/#respond Thu, 01 Mar 2007 15:59:03 +0000 http://www.sheepguardingllama.com/?p=1770 Continue reading "Technical Web Design with XHTML and CSS"

]]>
Six day class that I taught at Castile Christian Academy covering strict technical web site design using standard XHTML and CSS. Roughly the equivalent of a collegiate class in web design. Few college courses actually go this far and cover this much from the technical perspective. This class does not cover JavaScript or other web programming methods. This is strictly a class covering the technical aspects of building a single web page.
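For readers curious what that looks like in practice, here is a minimal sketch of the kind of single page the class works toward: a valid XHTML 1.0 Strict document with all presentation pulled out into a separate stylesheet. The file names, content and styles below are purely illustrative and are not taken from the course materials.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <title>Sample Page</title>
    <!-- no presentational markup in the document; all styling lives in the stylesheet -->
    <link rel="stylesheet" type="text/css" href="style.css" />
  </head>
  <body>
    <div id="content">
      <h1>Sample Page</h1>
      <p>Structure belongs in the XHTML; presentation belongs in the CSS.</p>
    </div>
  </body>
</html>

/* style.css (hypothetical) - every visual decision is made here, never in the markup */
body { font-family: Georgia, serif; margin: 0; background: #fff; }
#content { width: 40em; margin: 2em auto; }
h1 { color: #334455; }

A page built this way should validate cleanly against the W3C markup validator, which is the bar that "strict" technical design aims for.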

Due to the length of the course it was split into six days (one day per week) and the videos have been split into a dozen parts.

Technical Web Design with XHTML and CSS by Scott Alan Miller Day 1a
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 1b
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 2a
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 2b
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 3a
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 3b
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 4a
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 4b
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 5a
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 5b
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 6a
Technical Web Design with XHTML and CSS by Scott Alan Miller Day 6b

Xvid videos in 320×240, perfect for playback on Linux, Windows, Mac OS, Creative Zen Video, etc. Videos hosted courtesy of OurMedia and the Internet Archive.

This class does not cover any theory of aesthetics in web design. Nor does it cover server administration or management. The goal of this class is to take the student from only a basic knowledge of the web to possessing the skills necessary to pursue a career in professional web design.

]]>
https://sheepguardingllama.com/2007/03/technical-web-design-with-xhtml-and-css/feed/ 0
MPAA Movie Rating Scam https://sheepguardingllama.com/2007/02/mpaa-movie-rating-scam/ https://sheepguardingllama.com/2007/02/mpaa-movie-rating-scam/#respond Wed, 21 Feb 2007 15:45:05 +0000 http://www.sheepguardingllama.com/?p=1760 Continue reading "MPAA Movie Rating Scam"

]]>
I have long been upset with the MPAA and their movie rating system (you know: G, PG, PG-13, R, etc.)  They are a private organization set up to support the movie industry and they answer to no one. They are unmonitored and very secretive. I have never understood why movies are given a useless blanket rating instead of being rated in separate categories so that parents (or the viewers themselves) could make their own decisions about what they are concerned about seeing (some people don't care if they see violence or suspense but don't want to see nudity; someone might not be as concerned about language as someone else but might be concerned about drug use, etc.) I have often felt that there are many PG movies that are very inappropriate for the majority of children and plenty of R movies with hardly anything wrong at all.

Take Evil Dead for example.  If I was a rater or a parent deciding if my kids could watch that movie I would have given it a PG rating.  Sure there is “blood” but it is SO fake and almost never human blood just “blood” like the house walls bleeding.  No amount of “wood and plaster blood”, in my opinion, should ever take a movie over PG.  What is wrong with red liquid running out of wood?  They didn’t even imply that it was human blood.  Nothing of the sort.  Just a haunted house with walls that bled.  Oh no, don’t let your kids see that!  I guess a hospital documentary would be NC-17 then.  Even if it was just shots of people using ketchup in the cafeteria.

Wil Wheaton has a review today of This Film is Not Yet Rated.  It is a good, short article and I think that anyone who ever uses the MPAA rating (I do not) should read it before relying on such a system.  If you really want to take a stand against the MPAA you can simply do what I do and never go to movie theatres.  The MPAA rating is mostly only used there and you can voice your opinion pretty strongly with your pocketbook.

]]>
https://sheepguardingllama.com/2007/02/mpaa-movie-rating-scam/feed/ 0
Ethics in Neuroscience https://sheepguardingllama.com/2007/02/ethics-in-neuroscience/ https://sheepguardingllama.com/2007/02/ethics-in-neuroscience/#respond Fri, 09 Feb 2007 22:22:14 +0000 http://www.sheepguardingllama.com/?p=1748 Continue reading "Ethics in Neuroscience"

]]>
The Guardian reports today that scientists have created a device capable of foretelling a person’s actions. This device is very interesting and has huge implications. The scientists who have created it are not claiming to be able to read minds but say that they have roughly a seventy percent accuracy for predicting short term actions based on brain activity.

The question that society immediately asks, as does Steven Spielberg in the sci-fi dud Minority Report, is: Is it ethical and/or practicable to judge a person's likelihood of committing a crime? This is an ethical issue that society is going to have to face very soon, as this technology is going to mature at a formidable pace and, like all technology, rapidly outpace society's ability to comprehend it within the standard framework of ethics and morals. (Similarly, much of society today has little or no ability to relate so-called "digital" crimes to more traditional forms of theft, misrepresentation, harassment, etc. In the future society will learn to see computers as a normal part of life, and "digital" crime will just be another mode of traditional crime rather than a special case outside of the morals of normal life.)

Let me pose five questions regarding the ability to "read someone's mind."

Question Number One: Can a person who intends to commit a crime spend time practicing with a mind reading device to learn how to "intend" to do one thing until the last minute and change their mind at the very last second? This would be a form of "gaming" the system. It might be feasible for a person to mislead a mind reading device by, perhaps, convincing themselves that they won't do something wrong until the very last second. Or, for organized crime or terrorists, one person could intend to have other people commit crimes but not inform a number of people as to what crime would be committed, when, or by whom, so that an entire cell of people might be willing to commit a crime or act of terrorism but have no foreknowledge of the event, circumventing the entire system.

Question Number Two: Does a mind reader take into account the intents of people who have convinced themselves that something is not unethical? Take, for example, all of the people who believe that anything that is available online for download is legally theirs for the taking even if someone previously stole it from someone else. Some of those people (or so I am told) actually believe that what they are doing is legal. If this is true then they do not believe that they are committing a crime. Along the same lines, many people do not believe that it is illegal or immoral to be involved with a crime if the initial crime is committed by someone else. For example: you hire a hitman to kill someone for you. Many people believe that the hitman is a murderer but believe that they, as the actual person instigating the killing, are not committing a crime.

Question Number Three: Do hardened criminals see what they do as a crime? Perhaps the average seasoned bank robber continues to feel that his or her actions are illegal but needs the money or enjoys the high. But what about serial killers? How many serial killers feel that they are going to commit a crime before they actually do it?

Question Number Four: The locked cookie jar scenario. You want cookies. You know you have no willpower to avoid cookies. You lock a cookie jar to keep yourself from eating cookies. You “intend” to attempt to break into the cookie jar but have barred yourself from doing so. Do you have cookie criminal intents? Is it wrong? Is it wrong even if it is you who stopped yourself from stealing a cookie? What if it was someone else who stopped you from stealing a cookie? Are you worse than the person who doesn’t intend to steal a cookie but does absentmindedly at the last second just because they were “there”? Are you a speeder who owes a traffic fine even if you bought a car with a limiter so that you couldn’t physically drive too fast even if you tried to do so?

Question Number Five: What about people – and how many people are like this – who intend to do something wrong some of the time but stop themselves before actually doing it? Maybe this brain reading device does not fall prey to this type of inaccuracy, but it seems unlikely that it wouldn't at least a fair portion of the time.

It seems to me that the nuances of human intention are far too complex for any machine, or even for people themselves, to express. Won't criminals just learn to carry a "random crime generator" to allow them to make criminal decisions at the last possible second, removing the element of intent even though their intent would be significantly greater than it would have been otherwise? If we, as society, cannot truly define intent, then how can we judge a machine's ability to live up to that non-existent standard?

Perhaps, as Americans, we have another reason to not desire to have a mind reading device used to judge our legal, moral and ethical intentions: "…all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. — That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, — That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it…" – Thomas Jefferson and Benjamin Franklin in the United States' Declaration of Independence. Any device, instigated by the government, that is designed to eliminate the capacity for citizens to band together in an attempt to overthrow a corrupt government is not only in and of itself unethical but is in direct opposition to the very letter of the intent of the formation of our nation. Judging the "intent" of others, by the government, is an act of desperation and signifies a government that is no longer representing those who are governed and is solidly within the realm of totalitarianism. Hitler, Stalin, Mao, Castro, Wilson, etc. would have gladly welcomed such tools into their arsenal of anti-libertarianism.

]]>
https://sheepguardingllama.com/2007/02/ethics-in-neuroscience/feed/ 0