
The Ocean Color Forum has transitioned to the Earthdata Forum (https://forum.earthdata.nasa.gov/). The information below is retained for historical reference. Please sign in to the Earthdata Forum for active user support.

Up Topic Special Topics / Inherent Optical Properties Workshop / IOP data ranges (locked)
- - By seanbailey Date 2008-08-06 17:51
All,

Here is a strawman proposal for assigning algorithm failure based
on retrieved data values:

The following thresholds will be used to flag IOP retrievals as suspect,
indicating an algorithm failure condition.  At a minimum, the lower
thresholds are set by the values of aw and bbw for the given wavelength.
The maximum threshold values are as follows:

Product          Max threshold (m^-1)
-----------      --------------------
a, aph, adg      5.00
bb, bbp          0.015

The sum of aw, aph and adg cannot be greater than a (total).  Similarly,
the sum of bbw and bbp cannot exceed bb.  No product should go negative. 
If any product is returned as a value less than zero, the pixel will be flagged
as failed, even if the combined sum would fall above the minimum threshold
for the total IOP in question (i.e. a or bb).

We recognize that total absorption values greater than 5 m^-1 can occur; however,
for the global ocean these are EXTREMELY uncommon.  Additionally, the
current product scaling for the IOP products (to go from calculated floats to
stored integers) has a maximum value of 5.777, so a threshold of 5.0 is also a
practical consideration.
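
To make the proposed test concrete, here is a minimal per-pixel sketch in Python. The function and variable names are made up for illustration; this is not the actual processing code, just the logic described above under the stated thresholds:

    # Hypothetical sketch of the proposed range/consistency test (illustration only).
    A_MAX  = 5.0     # max threshold for a, aph, adg  [m^-1]
    BB_MAX = 0.015   # max threshold for bb, bbp      [m^-1]

    def iop_failed(a, aph, adg, bb, bbp, aw, bbw):
        """Return True if a pixel's retrieved IOPs should be flagged as failed."""
        # No product may be negative.
        if min(a, aph, adg, bb, bbp) < 0.0:
            return True
        # Lower thresholds: the totals cannot fall below the pure-water values.
        if a < aw or bb < bbw:
            return True
        # Upper thresholds.
        if max(a, aph, adg) > A_MAX or max(bb, bbp) > BB_MAX:
            return True
        # Component sums cannot exceed the totals.
        if aw + aph + adg > a or bbw + bbp > bb:
            return True
        return False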

Regards,
Sean
Parent - By esjboss Date 2008-08-06 18:15
Sean,
I think you need to reconsider. If the uncertainty in an observable is bigger than the value measured in the validation data, a negative solution should be OK. The uncertainties in the validation data should be driving the threshold.
Cheers,
   Emmanuel
Parent - By esjboss Date 2008-08-06 18:16
Sorry,
I agree with your threshold values (larger than anything we ever observed).
The most negative value that is acceptable should be driven by the uncertainties.
Cheers,
  Emmanuel
Parent - - By zplee Date 2008-08-06 18:35
Sean and all:

It is not clear to me what is meant by "If any product is returned as a value less than zero, the pixel will be flagged as failed." Does that mean, for an ocean pixel, if the derived aph(550) is negative, then the products atot, bbp, and adg at all wavelengths will be considered "failed"? If that is the case, I think a revision of this "failure" standard is required. This is because, even if we get a 'perfect' total absorption at, say, 443, we may still get a negative aph at, say, 531 (because it depends on the Rrs quality at that band). Yes, we can always use a spectral model to remedy such situations. But, before that, it is better to be more cautious about calling a pixel a "failure". An individual standard for each product at each wavelength is better.

As to the points that Emmanuel raised about dynamic aw and bbw, theoretically I agree. But I would think the improvement will not be that dramatic, as S does not vary widely in oceanic waters, while a high-resolution S map for coastal waters is still a research subject, and how applicable a monthly averaged S is to daily MODIS/SeaWiFS images is not certain. That is my two cents.

Cheers,
zhongping
 
Parent - - By esjboss Date 2008-08-06 18:46
ZP,
The difference between bbw for fresh water and full-salinity ocean water is 30%, so we need to address this bias (at the least, use bbsw and not bbw). For example, the salinity difference between the Red Sea and Puget Sound is as large as 10 psu, which results in a ~8% bias in bbw. Dealing with this bias is easy, so we may as well take care of it.
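(For a back-of-envelope check of those numbers, assuming a simple linear scaling of bbw with salinity rather than the real bbsw model, and purely illustrative values:

    # Rough illustration only: assume bbw grows roughly linearly with salinity,
    # with ~35 psu seawater backscattering ~30% more than fresh water.
    def bbsw_approx(bbw_fresh, salinity_psu):
        return bbw_fresh * (1.0 + 0.30 * salinity_psu / 35.0)

    bb_35 = bbsw_approx(1.0, 35.0)   # relative bbsw at 35 psu
    bb_25 = bbsw_approx(1.0, 25.0)   # relative bbsw at 25 psu (10 psu fresher)
    print((bb_35 - bb_25) / bb_35)   # ~0.07: a several-percent bias from a 10 psu difference

which gives a bias of the same order as the ~8% quoted above.)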
Another 2c',
   Emmanuel
Parent - - By zplee Date 2008-08-06 19:01
Emmanuel:

Yes, I agree that eventually we should consider using T/S-based aw and bbw. I am just not sure, for now, whether adding that T/S variable will improve things much or add extra uncertainties, as it requires, in theory, co-registered measurements of T/S.

Cheers,
zhongping
Parent - By Hirata Date 2008-08-07 12:36
Dear all,

I agree that we should correct bb_w to bb_sw, although we should keep in mind that the correction does not necessarily improve our results when IOP outputs (after the correction) are compared to NOMAD, because:

1) It may be the case that not all instruments (used for collecting in situ data in NOMAD) use the same protocol for T/S corrections. Some of the data in NOMAD may not even be corrected for T/S. There would be an inconsistency between the data and the outputs from the IOP models (or, do we know (a) which data in NOMAD are corrected for T/S, and (b) whether the correction scheme applied to the NOMAD data, if that is the case, is the same as the one we will apply to our IOP models?).

2) The salinity correction itself (the one applied to our IOP models) would not be perfect but only an estimate of the "real" correction, if we use a ***climatology*** salinity map.

These points sound negative, but my opinion is that we should still make the correction, because bb_w ~= bb_sw (they are not equal), which is the reality!  Also, our aim in this workshop is not a "real validation" of each IOP model, but a sensitivity comparison, I think.

Another thing:
Now that this T/S correction issue has been discussed, I suggest that the validation data (NOMAD v2) include bbp rather than bb, so that a user will have a choice between bb_w and bb_sw values in the future (if he/she wants bb_tot). Also, things will be easier when the correction scheme for our IOP models is updated in the future.

Taka
Parent - - By seanbailey Date 2008-08-06 19:38
ZhongPing, Emmanuel,

My strawman proposal is open for discussion. In fact, this is
why it was posted ;)

Here's my perspective on the matter.  I agree that measurement
uncertainties need to be considered; however, this should be done
at the level of what we 'measure' - in the remote sensing case,
that is radiance (or reflectance).  A slightly negative reflectance, given
our uncertainties on its retrieval, is acceptable.  An IOP inversion
algorithm, however, isn't a measurement.  The IOPs derived from
an inversion algorithm should all be geophysically reasonable.
Negative a or bb isn't reasonable.  It gets a little fuzzier with adg
and aph, given the manner in which some of the algorithms derive
these parameters - but no way can there be negative bbp. 

Now, I'll grant that not all parameters at all wavelengths are the primary
retrieved values from a given IOP inversion routine.  With that in mind,
I could see modifying the test to be only on the primary retrieved parameters. 
We may eventually arrive at per product, per wavelength standards, but in
reality the products and wavelengths are integral to each other - can't really
have one be good while another is bad - that indicates (to me at least) a
fundamental flaw in the inversion.

Remember, our goal is global, we can afford to be a bit stringent -
it is a big ocean :)

If you (that includes anyone out there, hint, hint) want to propose alternative
strategies, by all means do so.  My offering was to get the ball (and discussion)
rolling.  We're looking for consensus, not edicts :)

Sean

P.S.

As for basing the uncertainties on those within the validation data set,
I see that as a fool's errand.  The current in situ archive of IOPs
has far too large a measurement uncertainty for practical use beyond
a bulk "am I in the ballpark" analysis. 
Parent - - By zplee Date 2008-08-06 20:12
Sean:

Two more little comments:

1) Yes, bbp should not be negative. But, in the reality of remote sensing, if Rrs is strangely small for some strange reason, a 'practically' negative bbp will result. That does not automatically mean the algorithm is flawed, but it certainly raises a flag to dig into.

2) About "can't really have one be good while another is bad": actually this is quite possible in the remote sensing world. Fundamentally, Rrs is a collective measure of the ratio of bb to a. We then, 'magically', separate these bb and a into different components via 'algorithms'. We can have 'perfect' a and bb, but significantly bad (or even negative) adg or aph (depending on their relative contributions, etc. ...).
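
(A trivial numeric illustration of that point, with hypothetical values not taken from any real retrieval:

    # Hypothetical numbers, for illustration only.
    a_w   = 0.007    # approx. pure-water absorption near 443 nm, m^-1
    a_tot = 0.030    # "perfect" retrieved total absorption, m^-1
    a_dg  = 0.025    # overestimated detrital/gelbstoff absorption, m^-1

    a_ph = a_tot - a_w - a_dg    # remainder assigned to phytoplankton
    print(a_ph)                  # -0.002 m^-1: negative aph despite a perfect total a

The total can be spot on while the partitioning pushes one component negative.)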

It is nice to see the ball rolling, though ... :)

Cheers,
zhongping
Parent - - By seanbailey Date 2008-08-07 02:45
Agreed.  Unreasonably low Rrs may give a negative bbp.  We shouldn't expect
an algorithm to get a meaningful retrieval given unreasonable input.  We also
shouldn't accept that output as legitimate, even if it is only slightly negative.
It's wrong and cannot be used further, so we should flag it as such.

I stand by my statement as to the good and the bad.  You'll have to prove to me
that given a perfect a and bb, aph or adg can be negative.  Even if
it is possible in the realm of IOP inversion algorithms, how can you know that
you're getting good a and bb if the derived products don't make geophysical sense?
I think it is safer to flag it as a failure.  Yes, fundamentally we can really only get at
a and bb from Rrs, but I hope more than magic is involved in teasing out adg and aph!
I still see it as a failure if the inversion routine returns geophysically unreasonable
adg and aph - although to give a nod to Emmanuel, there is uncertainty in the
retrieval - as I said earlier, adg and aph are a little fuzzier...

Perhaps we need more than a binary flag...

Sean
Parent - - By esjboss Date 2008-08-07 05:15
Sean,
I don't agree, and this is why.
If you delete all the negative (but within-uncertainty) results, you will bias the spatial (or temporal) mean to be higher than it should be.
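(A quick synthetic check of that effect, using made-up numbers purely to show the statistics:

    import numpy as np

    rng = np.random.default_rng(0)
    true_bbp = 0.0005                     # small "true" value, m^-1 (hypothetical)
    sigma    = 0.0010                     # retrieval uncertainty, m^-1 (hypothetical)
    retrievals = rng.normal(true_bbp, sigma, 100_000)

    mean_all      = retrievals.mean()                  # ~0.0005: unbiased
    mean_censored = retrievals[retrievals > 0].mean()  # ~0.0010: biased high
    print(mean_all, mean_censored)

Dropping the negative-but-within-uncertainty retrievals roughly doubles the apparent mean in this example.)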
Cheers,
   Emmanuel
Parent - - By seanbailey Date 2008-08-07 13:02 Edited 2008-08-07 13:24
Emmanuel,

I never said delete them.  I said flag them. We've had the issue with
what to do with those pesky negative nLws when we bin.  To date,
they are NOT included in the bin files.  Bryan (and I agree completely
with him) has argued for quite some time that we should be including
them for the very reason you state for the IOPs here.  However,
a and bb are not the same.  It is reasonable to expect that nLw may go
negative, it is NOT reasonable to expect that a and bb may do so as well.
If the uncertainties on the retrieval are so large that they exceed pure water
values, that algorithm needs some serious attention.

I've relented a bit on the adg/aph - someone needs to come up with a reasonable
estimate of the uncertainties... any takers?

Sean
Parent - By Hirata Date 2008-08-07 16:27
Sean,

My own test analysis showed that, when aph(670) < 0 for example, some other IOPs (adg, bb, etc.) at the same and/or other wavelengths were retrieved well. Since this is not just my guess but the result of an analysis, I would suggest that we consider a failure flag for each product and each wavelength.

Also, nLw is not strictly a measured quantity; it is rather a product derived (or estimated) from the measurement of radiance at TOA (using atmospheric correction, etc.). Bearing this in mind, I am not sure it is a good idea to include negative nLw (or Rrs) in our analysis, even if it is only slightly negative. I would rather be surprised if the IOP models returned good retrievals when the input nLw or Rrs is negative.

Also, Rrs_exact in the NOMAD v2 subset, for which I suppose you used Morel's f/Q correction(?), is not really a measurement. The f/Q is derived using a particular shape of VSF in his simulation (although he varied the VSF as a function of Chl). But the VSFs used in the simulation are still modelled VSFs, not measured VSFs. Since we are talking about radiance, not irradiance, we really need to pay attention to the VSF. I am not opposing the use of his simulation results, but we need to keep this in mind. Actually, I just wanted to ask whether you want us to use Rrs_ex for the workshop, instead of Rrs.

Cheers,

Taka