Monthly Archives: November 2013

Cubesats galore

I’m sure everyone has read, seen, or heard about the latest cubesat feat this November:  29 cubesats were deployed from one rocket launched in the US, and another 33 were deployed from a rocket launched by Russia.

The US launch was conducted by Orbital Sciences from Wallops Island, Virginia, using a Minotaur I rocket.  The launch occurred on 20 November, and you can watch the video below.

On 21 November, the Russian Dnepr launch was conducted by Kosmotras from Yasny, in Russia’s Orenburg region.  There does seem to be some discrepancy in the numbers, as the Kosmotras site says only 24 cubesats were deployed.  You can go to their site to see which payloads were released and who owns them.  It looks like all satellites from the Russian launch were released into a sun-synchronous Low Earth Orbit.  Video of that launch is below.

This fact might interest you:  both launch systems are based on intercontinental ballistic missiles that probably once targeted areas in either the US or Russia.  The Dnepr is based on the SS-18 Satan, and the Minotaur I is based on the Minuteman II.

There is another way to deliver cubesats into orbit, though, and it’s pictured below.  You can just shoot them from a launcher on board the International Space Station.  You can read more about that from Discovery News.  And while Discovery’s calling it a cannon is hyperbole, it gives a person a good sense of what’s happening.  Sounds more intriguing, too.

The JEM Small Satellite Orbital Deployer in action. Cubesat Cannon just sounds so much better, though.

Eighteen years after its start, SBIRS still hasn’t quite replaced DSP (or, the Air Force gets less by spending more)

SBIRS GEO? So you’re looking at half the constellation, then. Click to embiggen. Picture from wikimedia.

Sad but true.  According to this post on Spaceflightnow.com, prime contractor Lockheed Martin and its customer, the United States Air Force (USAF), are slowly and expensively achieving a goal:  replacing older USAF Defense Support Program (DSP–and more DSP info here) satellites with Space Based Infrared System (SBIRS) GEO satellites.  The SBIRS GEO-2 satellite is officially operating as the newest part of the USAF’s early-warning infrared satellite constellation.  The majority of this constellation currently consists of DSP satellites.

DSP, it’s what currently works…click to embiggen. Photo from USAF.mil site

Consider:  the older “block” (generation) of five DSP satellites cost about $400 million per satellite (which this USAF fact sheet confirms).  That makes the DSP program seem like an outright bargain compared to the $17.6 billion the USAF has spent on the two SBIRS GEO satellites so far.
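To put those two figures side by side, here’s the back-of-the-envelope arithmetic, using only the numbers quoted above (these are the article’s figures, not official program baselines):

```python
# Rough per-satellite cost comparison, using the figures quoted above.
dsp_cost_per_sat = 400e6       # ~$400 million per DSP satellite (per the USAF fact sheet)
sbirs_total_spent = 17.6e9     # ~$17.6 billion spent on the first two SBIRS GEO satellites
sbirs_sats = 2

sbirs_cost_per_sat = sbirs_total_spent / sbirs_sats
ratio = sbirs_cost_per_sat / dsp_cost_per_sat

print(f"SBIRS GEO: ~${sbirs_cost_per_sat / 1e9:.1f} billion per satellite")  # ~$8.8 billion
print(f"Roughly {ratio:.0f}x the per-satellite cost of DSP")                 # roughly 22x
```

So each SBIRS GEO satellite has cost on the order of twenty-two DSP satellites, at least by this crude division, which ignores development versus production costs.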

But that’s not what I want to talk about.  This article really is just a hint of the next lesson series I would like to write for you.  What do both SBIRS and DSP have in common, aside from the infrared sensor payload?

Again, for the American readers, have a great Thanksgiving!  Everyone else, tomorrow and perhaps Friday is time off for me, so have a great rest of the week.

NOAA’s low hanging problem–Part 7

The last post, part 6, went into detail about the problems the Independent Review Team (IRT) brought to the National Oceanic and Atmospheric Administration (NOAA) regarding its satellite programs.  These were the problems the IRT found and documented in its 2012 assessment report:  Oversight and decision process, governance, JPSS Gap, programs, and budget.  The 2013 assessment report that followed was the IRT’s attempt to see how seriously the NOAA was taking these problems.

KUMBAYA?

Stunningly, the IRT noted the NOAA had resolved most of the problem areas.  As far as the IRT was concerned, the NOAA was well on its way to becoming one happy family and making its satellite programs healthy.  But a look at the color charts on pages 9 and 10 (shown below) of the IRT’s 2013 assessment tells a slightly different story.  And some different issues related to the Joint Polar Satellite System (JPSS) come to the fore:  (JPSS) gap policy and implications, gap mitigation, and program robustness.


Page 9 from 2013 assessment report–click to go there.


Page 10 from 2013 assessment report–click to go there.


SO WHAT HAPPENED?

It’s not clear how the NOAA fixed the oversight and decision process.  As explained in part 5 of this series, the work environment sounded very hostile.  One key question prompted by such incivility:  what drove upper management in these programs to micromanage, distrust their people, and act with hostility?  Those upper program managers are influenced and rewarded by higher-ups in the NOAA.  Someone above them (the bosses’ bosses?) was at the very least aware of the dysfunction in these programs.  At the very worst, they not only tolerated the situation but encouraged it.  The bosses’ bosses may have wanted someone thought to, ironically, “get it done,” no matter what or who got broken.  And according to page 4 of the IRT’s report, those bosses’ bosses were not reviewed.

There’s a very strong possibility, then, that the oversight and decision problem didn’t exist just within the programs the IRT was evaluating, but throughout the NOAA, something dealt with daily.  An educated guess based on the IRT’s assessments is that such management was institutionalized, not just a personality or two.  Was there a full-scale change of leadership and personnel within the whole NOAA?  That’s doubtful.  So what changed?  What could’ve made the oversight and decision process better in the IRT’s eyes?  The IRT really doesn’t say anything about the NOAA’s response to the 2012 assessment, other than that the issues seem to be fixed.

LED DOWN THE GARDEN PATH?

The answer may be that the IRT was sold a bill of goods.  Looking at the charts on pages 9 and 10 of the 2013 IRT assessment report, note that functional organizations’ roles, responsibility for timely and responsive communications, and the JPSS scope of responsibilities are all yellow.  These are very important issues for contractors and the lower government echelons in day-to-day work.  The yellow rating for functional organizations’ roles, for example, means there’s still confusion about who is responsible for what and who assigns people to different functions, like policy, budget, operations, etc.

It means workers are still being yanked around by different managers for different things.  Communications aren’t working?  Hardly a surprise, since even healthy organizations constantly deal with communications issues.  But this likely means upper management isn’t telling the lower echelons what’s going on, and perhaps vice versa.  And of course the JPSS scope of responsibilities isn’t quite clarified.  This is a program that was created from an existing program in early 2010.  Nearly four years after the JPSS program’s inception, people still don’t know its scope of responsibilities?  Interesting.

More on Monday (I think we’ll finally be able to tie this one up)—Happy Thanksgiving!

NOAA’s low hanging problem–Part 6

You learned from the previous post that the National Oceanic and Atmospheric Administration (NOAA) invited the Independent Review Team (IRT) to find the issues in its satellite programs, like the Joint Polar Satellite System (JPSS), that were causing program slowdowns and costing more money.

The IRT found five different concerns:  Oversight and decision process, governance, JPSS Gap, programs, and budget.  It elaborated on each of these concerns, which you can read about in its 2012 assessment report.  The gap-filler solution to the JPSS Gap concern was the instigator of this series of posts.  Oversight and decision process, what that possibly meant and its effects, was discussed in part 5 of this series.  Governance, the second concern, helps to define what roles different agencies, such as NASA and the NOAA, and their subordinate organizations play in the program.

Structured approach

Governance structure and definition are normally summarized in an organizational chart.  The IRT showed off two different governance models on page 16 of the 2012 IRT assessment.  The models are shown below.

Governance

Click on chart to go to 2012 IRT Assessment Report

The JPSS model is obviously very different from the GOES-R model.  More complicated.  More fingers in the pie.  At a guess, the JPSS governance model probably caused all sorts of tension over who’s responsible for what (including affecting the oversight and decision process).  And that’s the IRT’s point.  The report urges the NOAA to adopt a simpler, more successful model, kind of like GOES-R’s.  It’s amazing organizations even think governance models like the one on the right are the way to do business.

Programs

The other IRT concern, Programs, is just a summary of the two different NOAA satellite programs, GOES-R and JPSS, which seems odd.  The IRT likes what the GOES-R program is doing, but the JPSS program has “a significant number of high-level issues” (page 24, 2012 IRT assessment).  It seems like Programs should be at the front of the report.

Budget buffoonery

Then there’s Budget.  A lot of things come down to money, and even the government has to have a budget for its programs.  For JPSS, the program requiring the gap-filler, the IRT found all sorts of issues.  Two years of underfunding slowed the program down (duh!).  Program requirements (the list of program “needs” a government determines for a program) are quite critical in the budget equation, yet JPSS requirements were based on an older program’s requirements and were never analyzed for validity.  There is no Independent Cost Estimate (ICE) of the JPSS program (independent estimates help give a non-partisan look at costs, much like the IRT itself, in theory).  And–this is an incredible admission–the IRT just doesn’t understand why JPSS (and GOES-R) costs as much as it does.

These five concerns were the ones the IRT highlighted about the NOAA’s satellite programs.  Were they addressed?  What do you think?  If you’ve been following, you probably already know the answer, but that’s for my next post. 

NOAA’s low hanging problem–Part 5

IRT’s interim solution fixes just one NOAA problem

We finally come to the problems with the Independent Review Team’s (IRT) satellite recommendations to the National Oceanic and Atmospheric Administration (NOAA).  If you’d like a quick overview of the overall scenario, please go to my post here—maybe you’ll come back less confused?

Ultimately, the IRT is recommending the NOAA somehow build at least three interim, or “gap-filler,” satellites to fill the potential NOAA sun-synchronous Low Earth Orbit satellite gap.  The IRT’s recommendation:

“A gap-filler project must be started immediately as a hedge against the early potential gap previously described.” (page 18, 2013 IRT follow-up assessment)

This gap is projected to happen possibly as late as 2017 because of satellite mission design life considerations, although it could occur earlier.  You can go to these posts for clarification about mission design life.
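As a rough illustration of how design life drives a gap projection, the arithmetic looks like this.  All numbers below are hypothetical placeholders for illustration, not NOAA’s actual figures:

```python
# Hypothetical back-of-the-envelope gap estimate driven by mission design life.
# None of these numbers come from NOAA; they only illustrate the arithmetic.
current_sat_launch_year = 2011   # assumed launch year of the current polar orbiter
design_life_years = 5            # assumed mission design life
replacement_ready_year = 2018    # assumed year the replacement satellite is on orbit

expected_end_of_life = current_sat_launch_year + design_life_years
gap_years = max(0, replacement_ready_year - expected_end_of_life)

print(f"Expected end of design life: {expected_end_of_life}")  # 2016
print(f"Projected coverage gap: ~{gap_years} year(s)")         # ~2 year(s)
```

Of course, satellites routinely outlive their design life (or die early), which is why the projected gap has such a wide window.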

Here’s why the recommendation isn’t so simple:  it relies on the same slow, confused, and costly acquisitions mechanisms and decision processes which caused the initial “gap” problem.  Here’s what the IRT suggested as a solution:

“As to the feasibility of rapidly developing and deploying spacecraft, there are examples of small Low Earth Orbit (LEO) spacecraft being completed 2-3 years from start.” (page 18, 2013 IRT follow-up assessment)

Maybe the IRT gave the NOAA managers a list of examples of these shorter programs when it briefed them.  It would be good to know what its reference standard is.  Unfortunately, the list isn’t in the report.  And the gap problem is just a symptom of the other issues the NOAA was having.

The NOAA was concerned its programs were having issues:  satellite programs were getting more expensive and not getting done on time.  This was the original reason the IRT was established.  The IRT’s 2013 follow-up assessment came after a 2012 assessment report the IRT originally authored for the NOAA.  And the IRT found some significant problems.  On page 4 of the 2012 report, the IRT listed five concerns:  Oversight and decision process, governance, JPSS Gap (the satellite gap), programs, and budget.  Let’s look over each of these concerns to understand why it’s unlikely a “gap-filler” satellite program will be successful.  Some parts leading to this understanding will be longer than others.

Accountable government

To understand what the IRT means by the “oversight and decision process” is to understand the essence of the government’s role in buying, or “acquiring,” satellites (or most other things, like aircraft, tanks, toilets, etc.).  The government has money (taxpayer money) and has a need, normally described in a list of requirements.  So the government establishes a program for buying something.  In a program, there’s the part of government disbursing money, and the part of government providing oversight, decisions, and direction.

Contractors and companies work under the government’s oversight and according to the government’s decisions and direction.  If a contractor does any more or less work than required, it’s a breach of contract, so contractors essentially do exactly what the government has listed as program requirements.  Certain contractors can advise the government, but the government always has the last word in the decision-making process.

This is the part of the process the IRT is talking about.  Just under the “Oversight and Decision Process” heading, the IRT lists six issues as sub-bullets and elaborates on each.  Descriptors like “dysfunctional” oversight, no value added, confusion, ineffective decision-making, not timely or responsive, and lack of trust tell the story, and the enormity, of the NOAA’s problem.

Office Space, by a factor of 10

From cheezburger.com.

If you’re curious, you can read pages 11-15 of the 2012 report.  It sounds like the workplace was very hostile.  The IRT doesn’t use the word “micromanaging,” but its account makes the NOAA environment sound as if that kind of dysfunction was happening.  Unfortunately, the picture the IRT paints is very familiar—if it’s familiar to you, then I am sorry you had to work in that kind of environment.

The IRT found that the NOAA’s senior managers were apparently confused about their program responsibilities and authority.  Making an informed guess using the IRT’s sub-bullets as the outline, I believe micromanaging would’ve caused that.  If upper management is reaching down past you, then your authority is just a theory, nothing else.  Because a pattern has been established where all decisions rely on the very upper reaches of management, program decisions are slow in coming, and when the decisions are made, they aren’t effective.  That was another problem the IRT identified.

Instead of the knowledgeable managing engineer making the decision on the spot, the engineer waits to get permission first, because any program decision made without it will always be overturned.  This can be attributed to a lack of trust (the IRT identified this, too):  upper management doesn’t trust lower management, and lower management in turn doesn’t trust the “worker bees” and contractors.  This would also account for the unnecessary meetings (reviews) of an “adversarial character” (the IRT’s words) and a tremendous volume of reports.  It’s a very demoralizing and destructive cycle, and one the IRT unhesitatingly called out.

And that was just one major finding…