Terrapin Consulting

IT and Project Management Consulting


Archives for April 2014

April 23, 2014 By PM Dom

One Way to Understand Agile Methodologies

I describe the difference between the traditional waterfall methodology and an Agile methodology as the difference between going vertical and going horizontal. You can see this in the two pictures below.

The first picture represents the typical waterfall project schedule. You essentially do all the planning, then all the design, then all the coding, then all the testing, and then you implement. This is not exactly right – there is usually overlap between the phases (hence the name waterfall) – but it is more or less accurate. I call this “going vertical.”

[Image: Agile Example 2 – a typical waterfall project schedule]

This next picture is how the same project would be done using an Agile methodology. You would take Feature 1 and plan, design, code, test, and, maybe, implement before moving to Feature 2. I call this “going horizontal.”

[Image: Agile Example 1 – the same project under an Agile methodology]

It’s easy to see the advantages of each technique. Agile (going horizontal) takes you through all the phases early in the project, so you discover coding or testing issues on the first pass and can improve with each iteration. The downside is that you may discover something while designing Feature 4 that forces you to re-write earlier functionality. So Agile increases the probability of re-work but lowers the risk of discovering issues late in the schedule.

In the traditional waterfall methodology (“going vertical”), you rarely have to re-design something because you have taken everything into account during the (one) design phase. Of course, the issue with waterfall is that you don’t discover coding or testing problems until late in the project schedule – maybe too late.
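To make the “vertical vs. horizontal” distinction concrete, here is a toy Python sketch of the two orderings (the phase and feature names are placeholders, not from any real project):

```python
phases = ["plan", "design", "code", "test", "implement"]
features = ["Feature 1", "Feature 2", "Feature 3", "Feature 4"]

# Waterfall: "going vertical" -- complete each phase for every
# feature before starting the next phase.
for phase in phases:
    for feature in features:
        print(f"{phase}: {feature}")

# Agile: "going horizontal" -- take one feature through every
# phase before starting the next feature.
for feature in features:
    for phase in phases:
        print(f"{phase}: {feature}")
```

The only difference is which loop is on the outside – that is the whole distinction between the two pictures above.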

When appropriate, I use Agile since it lowers a project’s risk. However, Agile is not appropriate for many types of projects or companies, and the majority of projects are still managed using a waterfall methodology.

Filed Under: Project Management

April 22, 2014 By PM Dom

Most Active Project Managers on Twitter

Using Scraperwiki, I analyzed the top project management tweeters for a month (March 15-April 14, 2014). I gathered tweets using the #pmot hashtag. Other hashtags such as #pm, #projectmanagement and #pmi are common but #pmot seems to be the standard for Project Management Online Tweets.

The overall stats for this 31-day period:

  • 10,288 tweets in total
  • 330 tweets per day on average
  • 1,585 distinct Twitter screen names

The highest number of tweets, by far, was from @PMVault with 2,285 – nearly 1,900 more than the second-highest. However, @PMVault just tweets job postings, so it is useless for most PMs. I eliminated it from the rest of the analysis.
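For anyone who wants to reproduce this kind of tally, here is a minimal Python sketch. It assumes the gathered tweets were exported to a CSV with a screen_name column – the file and column names are hypothetical, not the actual ScraperWiki code:

```python
import csv
from collections import Counter

# Load the gathered #pmot tweets (hypothetical export file).
with open("pmot_tweets.csv", newline="") as f:
    tweets = list(csv.DictReader(f))

total = len(tweets)                                  # 10,288 in this sample
per_day = total / 31                                 # March 15 - April 14 is 31 days
by_name = Counter(t["screen_name"] for t in tweets)  # tweets per screen name

print(f"{total} tweets, {per_day:.0f} per day, {len(by_name)} distinct names")
for name, count in by_name.most_common(11):          # top tweeters (incl. @PMVault)
    print(name, count)
```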

The rest of the top 10 looks like this:

Top PM Tweets

Tweet Name        Tweets   Followers   Person
ProjManagers      422      5,309
ProjDirectors     370      5,022
ARRAPM            301      931         Allen Ruddock
AllThingsPMO      213      458         Alison Murray
humanwareonline   202      304
BarryHodge        173      822         Barry Hodge
PM4TM             129      1,084       Cesar Abeid
pmoplanet         129      1,761       Ralf Finchett
cobaltpm          117      7,879
thePMObox         113      93,000      Bernardo Tirado

Additional information is in this table:

Tweet Name        Website                         Content
ProjManagers      http://projectmanagers.org/     Blog articles by volunteers
ProjDirectors     http://projectdirectors.org/    Blog articles by volunteers
ARRAPM            http://www.arra-pm.com/         Blog articles
AllThingsPMO      (none)                          Everyday stuff
humanwareonline   http://www.humanware.it/        Blog articles (in Italian)
BarryHodge        http://projectnewstoday.com/    Blog articles from many sites
PM4TM             http://pmforthemasses.com/      Podcasts
pmoplanet         http://www.pmoplanet.com/       Blog articles, aggregated articles on PM
cobaltpm          http://cobaltpm.com/            Blog articles
thePMObox         http://paper.li/thepmobox       Aggregated articles (not on PM), podcasts, his book

Filed Under: Project Management, Research

April 17, 2014 By PM Dom

If Your PM Works 50-60 Hours Per Week, Then They are Doing It Wrong

Many of the PMs I talk to tell me that they are so, so busy – averaging 50-60 hours per week or more. To be provocative, I say, “You must be a really bad project manager.” My point is that if you are unable to manage your own time effectively, why should you be trusted to manage other people’s time?

To be sure, many people exaggerate the number of hours they work. Averaging 60 hours per week of actual work is damn hard. I suspect people include their commute in that number, and some confuse being at work with working. But I’m sure some people really do work 60 hours per week. And that means they are doing it wrong.

My main point is above – why are you so bad at managing your time? I’m positive you did not schedule yourself for 60 hours per week – probably 40 hours (or slightly less), like everyone else on the project. So if you’re scheduled for 40 hours and work 60, your estimates are off by 50%! More shocking: your estimates for your own work are going to be better than your estimates for the other team members, so your entire schedule is going to be off by more than 50%. Interestingly, the average schedule overrun is 60% (according to the Standish Group).
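The arithmetic, as a quick sketch:

```python
scheduled = 40                     # hours the PM scheduled for themselves
actual = 60                        # hours actually worked

overrun = (actual - scheduled) / scheduled
print(f"Error in your own estimate: {overrun:.0%}")  # 50%
```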

Assuming your PM is working 60 hours per week, I am concerned that they are either doing menial tasks or the quality of their work is low. Research shows that the quality of work goes down as hours worked go up. This is common sense – we make more mistakes at 9pm than we do at 9am. Now, if a PM is doing menial tasks (you know what I’m talking about – PowerPoint slides, tweaking Microsoft Project just one more time, etc.), the decrease in quality may not occur or be noticeable. But that is also concerning. As a PM, I work on the hard things – the risks and issues that require concentrated effort and innovative thinking. I can’t be effective doing that at midnight – my solutions will not be elegant or functional. If your PM can consistently work 60 hours per week with no noticeable decrease in quality, then they aren’t focused on the right things.

I work unplanned overtime just a few times per year. (Note the word unplanned – because some IT project work must be done on weekends, I plan to work several weekends per year.) If I see a PM working a large number of hours, it’s a red flag to me.

If your PM is consistently working 60 hours per week then they are doing it wrong.

Filed Under: Project Management

April 17, 2014 By PM Dom

Providing the Best Advice to Clients

I base my advice to clients on three things:

  1. Standards
  2. Peer-reviewed research
  3. Non peer-reviewed research

Too many consultants base their advice on personal experience. Personal experience is, of course, necessary and valuable but not sufficient. As the put-down goes: the plural of anecdote is not data. You must have a scientific reason to support the advice you provide.

Standards

I use standard methodologies when I manage projects. Typically this is the PMBOK framework, or Scrum for Agile projects, or the INCOSE V model for systems engineering, etc. I learned to do this by doing the opposite at the beginning of my career when I was working for one of the Big Four consulting firms. At that firm, we used proprietary methodologies. To a degree, clients hired us because of that – before the internet, best practices were not widely known and you had to hire a consulting firm to get that knowledge. But it also locked our clients into working with us – breaking up or switching vendors would be very difficult. That didn’t seem right to me so when I started Terrapin in 2003, I committed to only using standard, open methodologies. It makes it very easy for me to transition work to my clients.

It is good to base advice on standards because of the vast amount of data that surrounds the standard. The standard itself has been developed and vetted by many experts in the field. And then companies that adopt the standard produce data that is used to improve the standard. For example, PMBOK is on its fifth edition.

Using standards also helps on-board new employees or vendors. If you use PMBOK then you know for a fact that every PMP in the country (600,000+) knows how to manage projects within your organization.

Peer Reviewed Research

I use peer-reviewed research to underpin my advice too. Most peer-reviewed research is scientifically valid: the methodology and analysis are well thought-out and, by definition, the work has been reviewed by others in the field. The biggest problem with peer-reviewed research in project management is that there is so little of it. PM is not treated well by most universities. Economics, finance, operations, marketing and other disciplines receive massive attention and funding. I suspect there are just a handful of active researchers in PM.

On this blog, I will be highlighting the most impactful research I have read.

Non Peer Reviewed Research

Non peer-reviewed research can be as valuable as peer-reviewed research. With peer-reviewed research, you know someone has vetted it; with non peer-reviewed research, you have to do the vetting yourself. So I tend to rely on research from established sources that I trust. The biggest source is Gartner – they produce excellent research. I also enjoy the annual CHAOS Report from the Standish Group.

The large consulting firms such as McKinsey, Deloitte and PwC also produce research. Their research tends to be survey-based, and they usually survey their clients – Fortune 1000 companies – so the results are skewed towards big-company topics. Still, the results can be useful for understanding the opinions of CIOs in certain areas.

Filed Under: Project Management

April 8, 2014 By PM Dom

Affordable Care Act Website, US Edition

Following my posts on the Oregon and Maryland ACA website project failures, I am turning to the federal ACA website project. Unlike in Oregon, there has not been a thorough, publicly available audit of this failure, but some data points have been widely reported: 70% of users could not even log in at launch, and only eight people were able to sign up on day 1. I’ve read that stress testing before launch indicated the site could handle only 1,500 users – even though over 30 million Americans are uninsured. I also understand the team was not co-located (White House in D.C., CMS in Baltimore, contractors in Columbia, MD), which made decision-making difficult.

Steven Brill has written a story at Time (gated, here). It’s mostly about the team sent in to rescue the project, whose first job was to develop a dashboard. Brill quotes one member saying it was “jaw-dropping” that there was no “dashboard – a quick way to measure what was going on at the website…how many people were using it, what the response times were…and where traffic was getting tied up.”

Brill notes that “what saved it were…stand-ups.” He further says that “stand-ups…are Silicon Valley-style meetings where everyone…stands up rather than sits….” Brill implies that California software companies invented stand-ups. While I have no doubt that stand-ups are popular in Silicon Valley, I doubt they were invented there. I worked for an admiral (Craig Steidle) in the ’90s who held daily stand-ups, and stand-ups are a key component of the Scrum methodology. Many, many companies use stand-ups. I guess journalists are unfamiliar with them.

I look forward to a real audit of the project. Brill’s article is entertaining but not illuminating as to the problems that plagued the project. The federal ACA website project shares common characteristics with the state-led ACA failures – poor choice of contractors, poor project management, no risk management, lack of a single point of authority and poor oversight. Perhaps we can have the GAO review the effort – we must learn from these large IT project failures, otherwise we are doomed to repeat them.

Filed Under: ProjectFAIL

April 7, 2014 By PM Dom

Affordable Care Act Website, Maryland Edition

I recently documented the failings of the Oregon ACA website. Now it’s Maryland’s turn. Maryland has commissioned an audit, so we should get a good inside look, but the audit is not scheduled to be done until the summer. We already know a lot about this failure. According to documents from the IV&V contractor obtained by the Washington Post, these problems existed early in the project:

  • Insufficient State staffing
  • Insufficient time for software development
  • Lack of a detailed project plan
  • Inefficient communications
  • Lack of sufficient time for regression and system testing
  • Lack of a comprehensive QA and testing plan

The prime contractor, Noridian, has been terminated. A substantial amount of their fees has been withheld by the state.

There were also significant issues between Noridian and one of its subcontractors, EngagePoint. Interestingly, I read that Noridian hired EngagePoint because it had no real ability to develop the exchange itself. I don’t know why Maryland awarded a contract to a company that had to quickly hire a subcontractor to do the work.

Most recently, Maryland decided to scrap the $125M effort and adopt the same technology used in Connecticut. Adopting the CT exchange technology is expected to cost an additional $50M. It makes me wonder why some states didn’t just join together in the first place, or stage a phased implementation so mistakes could be corrected before a nationwide launch.

Filed Under: ProjectFAIL

April 7, 2014 By PM Dom

Affordable Care Act Website, Oregon Edition

There is a fantastic audit of Oregon’s attempt to implement the Affordable Care Act (ACA) (“Obamacare”). Highlights include:

  • Oregon combined ACA implementation with a complex project that determines if people eligible for healthcare subsidies are also eligible for other government programs. To state the obvious: you NEVER combine a high risk project with another high risk project.
  • Oregon chose Oracle as the software provider and paid them on a time and materials basis. Contractors are usually paid on a deliverable or milestone basis so the risk is shared.
  • Oregon paid Oracle through a contract vehicle called the Dell Price Agreement. The name alone indicates this may not be the proper way to contract with Oracle. In total, Oregon signed 43 purchase orders worth $132M to Oracle.
  • Oregon did not hire a systems integrator, a common practice in these types of projects.
  • Oregon hired an independent QA contractor, Maximus. Maximus raised numerous concerns throughout the project. They never rated the project anything but High Risk.
  • Lack of scope control, a delay in requirements definition, and unrealistic delivery expectations were cited as main issues.
  • As late as two weeks before the October 1 launch, the Government PM was positive about the project: “Bottom Line: We are on track to launch.”

Who Keeps Moving the Goal Post?

One interesting point in the audit is the difficulty of accurately estimating how much effort software development takes. As the project progressed, the team produced estimate-to-completion figures, and, as the audit’s chart shows, the more work they did, the higher the estimate-to-completion got. As they got closer to the end, the end got farther away. D’oh! In general, this shouldn’t happen: the more “actuals” you have, the better your estimates should be.
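The audit doesn’t say how the estimate-to-completion figures were produced, but the textbook earned-value formulas show why an estimate should converge as actuals accumulate rather than drift away. A minimal sketch using the standard formulas (ETC = (BAC − EV) / CPI), with illustrative numbers only, not from the audit:

```python
def estimate_to_complete(bac, ev, ac):
    """Standard earned-value ETC: remaining work scaled by efficiency to date.

    bac: budget at completion, ev: earned value, ac: actual cost.
    """
    cpi = ev / ac              # cost performance index (efficiency so far)
    return (bac - ev) / cpi    # ETC = remaining budget / CPI

# Illustrative: a $100M budget with 40% of the value earned after
# spending $60M. CPI < 1, so the forecast to finish grows.
etc = estimate_to_complete(bac=100, ev=40, ac=60)
print(etc)                     # 90.0 -- total forecast (EAC) = 60 + 90 = 150
```

With real actuals feeding the CPI, each successive estimate should get more accurate – the opposite of what Oregon’s figures showed.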

Who Knew this Project Would Fail? These Guys Did.

The audit quotes the QA contractor, Maximus, throughout the report. They appeared to have a good grasp of what was happening, pointing out risks throughout the project. In the audit’s summary of their risk assessments, the top row – Overall Health – is red (high risk) from Day 1. Unfortunately, the report says the project team became “de-sensitized” to the risk reports since they were always bad. Perhaps Maximus should have started with a green (low risk) status.

If One Oversight Group is Good, More Must Be Better

Several agencies had oversight responsibilities. Naturally, this caused confusion and blurred lines of responsibility; a common refrain was the lack of a single point of authority. The audit doesn’t make a recommendation, but I will: oversight should be the responsibility of a single group comprised of all required stakeholders. I have seen many public sector projects that believe additional levels of oversight are helpful. They are not. Extra layers serve to absolve members of responsibility – everyone can say they thought someone else was responsible for whatever went wrong. If there is only one oversight group, they can’t point fingers anywhere else and are more likely to do their job well.

Like Hitting a Bullet with Another Bullet

As noted above, Oregon combined ACA implementation with a complex project that determines if people eligible for healthcare subsidies are also eligible for other government programs.

The Standish Group produces a famous report (the CHAOS Report) that documents project success. The single most significant factor in project success is size. Small projects, defined as needing less than $1M in people cost, are successful 80% of the time while large projects, over $10M, are successful only 10% of the time. (Success is defined as meeting scope, schedule and budget goals.)

So Oregon decided to combine the high-risk Obamacare website project, with a projected success rate of 10%, with another high-risk project that also had a success rate of 10%. If the two were independent, the chance of both succeeding would be about 1%. That’s like hitting a bullet with another bullet.

Everything was On Track, Until It Wasn’t

As late as two weeks before launch, the Government PM reported that the launch was on track. The audit notes the system failed a systems test three days before launch, and the launch was delayed the day before it was scheduled. (It appears the system test conducted three days before launch was the first one performed.) Even while announcing the delay, Oregon said the site would launch in two weeks (mid-October). By November, the “launch” consisted of a fillable PDF form that was manually processed. The site had yet to launch six months later (March 2014).

There is a common exchange in project management: “How did the project get to be a year late?” “One day at a time.” By the April before the October launch, the project was months behind. As the spring and summer progressed, it fell further behind. And yet the PM continued to believe they would “catch up” and finish on time. I don’t know whether it’s ignorance or malfeasance at work here, but it is virtually impossible to “catch up.”

One funny (in a sad way) detail: the development effort was planned for 17 iterations. So what happened when they completed iteration 17 and still weren’t done? Iterations 17a, 17b, and 17c. Ugh. Also, the use of the word iteration implies an Agile methodology, but this isn’t indicated in the audit. It wouldn’t surprise me if Agile was misused on this project.

The audit has many lessons learned. I encourage you to read it all, especially if you are undertaking a large system implementation in the public sector.

Filed Under: ProjectFAIL
