Discussion:
Targen command line
carlo rondinelli
2013-12-14 17:59:57 UTC
Permalink
Hello, until now I have used a very simple targen command line to
generate the patch file. I have studied the online Argyll documentation and
would like to improve the command line, but I'm not sure about my choices.
The "old" command line is this:

targen -v -d2 -f1658 CanonPro-9500-Mark-II_xyz

What I would use now is this:

targen -v -d2 -G -B64 -N1 -f1658 Pro_9500_II_xyz

Do you have any advice for me, or are there errors in this new command
line?

Thank you very much
--
Carlo Rondinelli
Still-life, Fotografia Immersiva e Object per Virtual Tour
mob: + 39 389 9757042
skype: carlopano360
flickr.com/photos/carlorondinelli
alessiasalera.com

Notice pursuant to Legislative Decree 196/2003
The information contained in this message
and/or in the attached file(s) is to be
considered strictly confidential.
Its use is permitted exclusively
to the recipient of the message, for the
purposes indicated in the message itself.
Anyone receiving this message
is kindly asked to check whether
it has reached them in error.
In that case the recipient is asked to notify
the sender and, given the responsibilities
associated with improper use and/or disclosure of the
message and/or the information it contains,
to delete the original and destroy
every copy or printout.
Nikolay Pokhilchenko
2013-12-14 19:43:20 UTC
Permalink
Post by carlo rondinelli
targen -v -d2 -f1658 CanonPro-9500-Mark-II_xyz
targen -v -d2 -G -B64 -N1 -f1658 Pro_9500_II_xyz
Do you have any advice for me, or are there errors in this new command
line?
The -G parameter doesn't make much sense here, but I use it every time for RGB targets. I hope that with -G the target will be "more optimal".
Why -B64? Is your printer's black very deep and your instrument very noisy? I'm satisfied with 4-16 "black" patches; more than that make no sense, in my opinion.
If you plan to concentrate the patches towards the gray axis, it may be better to provide targen with a preliminary profile. A preliminary profile supplied to targen may improve the target much more than -G and -N alone, I suppose.
Moreover, I'm not sure that -N1 without a preliminary profile will improve gray tolerance, because the RGB values for grays may lie rather far from the R=G=B line in the device space. A preliminary profile (the -c"Prelim.icc" parameter) can greatly improve the target's value and gray tolerance even without additional targen parameters.
I'd recommend:

targen -v -d2 -c"Pro_9500_II_previous.icc" -A0.8 -G -B8 -e8 -w -W Pro_9500_II_xyz

The -w and -W parameters are for diagnostic purposes only. I like to view the patch distribution over the color and device spaces (you need a VRML viewer for that). You can change the -N value and compare VRML visualizations of the patch distribution for different -N values.
Hope it helps. Criticism is appreciated.
BC Rider
2013-12-14 22:04:56 UTC
Permalink
Hi,

If I make a preliminary profile with a 500 patch target and then a final profile using a 1000 patch target, I have printed 1500 patches that I'd like colprof to use. However, I understand colprof only uses the final 1000 patch target. In that sense the patches used to make the preliminary profile are "wasted" (for lack of a better term). Is this understanding correct?

In my simple brain, if I tell Targen to generate a 1000 patch target while giving it a preliminary profile that was made with 500 patch target, then Targen should assume the original 500 data points in its calculations and generate 1000 more unique data points. Then Targen should carry forward the original 500 data points so that chartread can combine them into an output with 1500 unique data points (as input to colprof).

Basically I'm trying to wring every possible value out of the 1500 data points printed and I'm not sure Argyll currently does this. Of course it is entirely possible I've got it all wrong and not making any sense. If so can someone please straighten me out! Thanks!
Ben Goren
2013-12-14 22:33:52 UTC
Permalink
Post by BC Rider
In that sense the patches used to make a preliminary profile are "wasted" (for lack of a better term). Is this understanding correct?
Yes, but you can use all those patches if you like. See:

http://www.argyllcms.com/doc/average.html

and pay attention to the -m flag.

Cheers,

b&
BC Rider
2013-12-15 00:19:21 UTC
Permalink
----------------------------------------
Subject: [argyllcms] Re: Colprof - utilizing data points from the preliminary profile
Date: Sat, 14 Dec 2013 15:33:52 -0700
In that sense the patches used to make a preliminary profile are "wasted" (for lack of a better term). Is this understanding correct?
http://www.argyllcms.com/doc/average.html
and pay attention to the -m flag.
Thanks a lot! That gets closer assuming:

1) Colprof ignores the sample numbers and patch IDs, since merging creates a lot of duplicate labels (even though the content is completely different), and

2) Colprof averages any duplicate samples (just like it averages the duplicate black/white samples)

Is this known to be so?

Having said that, I don't think it quite does the full job. Since Targen is unaware of the first set, the points it creates for the final 1000 patch target may coincidentally overlap or nearly overlap the initial 500 patch target, so the full value of 1500 patches is not realized. Presumably Targen would need to be aware of the first set for the final combined set to consist of 1500 unique, optimally placed points.

I suppose one could make the preliminary profile using a contrasting or complementary algorithm (from the final) in an attempt to maximize the value of the combined set.

In any case, simply merging the two data sets seems better than nothing and I can't see a downside.
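To get a feel for how often two independently generated targets can land near each other, you can measure nearest-neighbour distances between the sets. This is a toy sketch using random stand-in patch sets, not real targen output (which places points far more evenly):

```python
import random

def min_dist(p, patches):
    """Smallest Euclidean distance from point p to any point in `patches`."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in patches)

def near_duplicates(set_a, set_b, tol):
    """Count patches in set_b that land within `tol` of some patch in set_a."""
    return sum(1 for p in set_b if min_dist(p, set_a) < tol)

random.seed(0)
# Two targets generated independently over the same RGB cube (values in 0..1).
set_a = [tuple(random.random() for _ in range(3)) for _ in range(500)]
set_b = [tuple(random.random() for _ in range(3)) for _ in range(1000)]

dup = near_duplicates(set_a, set_b, tol=0.05)
print(f"{dup} of {len(set_b)} new patches fall within 0.05 of an old patch")
```

Any patch counted here is one whose information is largely redundant with the first target, which is the "not quite the full value of 1500" effect described above.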
Ben Goren
2013-12-15 01:03:23 UTC
Permalink
Post by BC Rider
Since Targen is unaware, the points Targen creates for the final 1000 patch target may coincidently overlap or nearly overlap the initial 500 patch target.
The point of preconditioning is to oversample the most problematic colors for the device. (Not directly, but you can not unreasonably think of it that way.) On the vanishingly small chance that some of the 500 colors (aside, of course, from the 100% / 0% patches) are exact duplicates (out of the billions of possibilities), that's a good thing since those'll be exactly the colors you most want to have multiple samples of.

When merging, the ``average'' command does whatever it needs to do to create a coherent .ti3 file that you can feed to colprof. It does not do any averaging in that mode. However, colprof itself will still make use of all the sample points in a way that, for your purposes, is no different from if you had averaged them beforehand. I'm sure there are differences, but you're not doing anything precise enough for those differences to be meaningful. If you're curious about what those differences are, I'm sure Graeme would be happy to elaborate....
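For intuition, pre-averaging exact duplicates before the profiler sees them looks roughly like this. This is an illustrative sketch with made-up samples, not Argyll code:

```python
from collections import defaultdict

def average_duplicates(samples):
    """Group samples by device (RGB) value and average their Lab readings."""
    groups = defaultdict(list)
    for rgb, lab in samples:
        groups[rgb].append(lab)
    return [(rgb, tuple(sum(c) / len(labs) for c in zip(*labs)))
            for rgb, labs in groups.items()]

samples = [
    ((255, 255, 255), (96.0, 0.1, -0.2)),
    ((255, 255, 255), (96.4, -0.1, 0.0)),   # duplicate white patch
    ((0, 0, 0), (3.0, 0.0, 0.0)),
]
print(average_duplicates(samples))
```

Feeding both white readings in separately instead simply gives that device value twice the weight in the fit, which for most purposes comes to much the same thing.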

Cheers,

b&
BC Rider
2013-12-15 21:42:13 UTC
Permalink
Post by Ben Goren
Post by BC Rider
Since Targen is unaware, the points Targen creates for the final 1000
patch target may coincidently overlap or nearly overlap the initial
500 patch target.
The point of preconditioning is to oversample the most problematic colors
for the device. (Not directly, but you can not unreasonably think of it that
way.) On the vanishingly small chance that some of the 500 colors (aside,
of course, from the 100% / 0% patches) are exact duplicates (out of the
billions of possibilities), that's a good thing since those'll be
exactly the colors you most want to have multiple samples of.
I see it a little differently. Oversampling the space is not a good thing (IMO). It translates into non-optimal sampling, since for a given number of points there will then, by definition, be under-sampled areas.
From my perspective, the point of the preconditioning profile is to define the space more accurately so the profiling engine does NOT oversample (or undersample). The goal should be to optimally sample the space so visual errors are evenly distributed. I believe that is what Argyll does.
That's why I see merely merging the two patch sets as non-optimal. Since they are unaware of each other, the combined points will not be optimally placed in the device space. If Targen were aware of the existing data set, it could take those points into account when placing the remaining 1000 data points, to best evenly distribute the errors.

However, even if Targen is aware of the existing 500 data points, it may not (in fact, probably will not) be able to achieve the same level of optimization as if it had all 1500 data points to place freely.

For this reason, it seems to me there is a trade-off in the choice of making a preconditioning profile or not. For a given number of patches (i.e. 1500), the benefit of defining the device space for Targen must surpass the loss of those data points in the making of the final profile. It may not always be worth it, or at least not worth consuming many of the available patches. My suggestion to "carry forward" the preconditioning data points in Targen to the final profile was an attempt to make this trade-off less critical.
Post by Ben Goren
When merging, the ``average'' command does whatever it needs to to create a coherent .ti3 file
Actually, patch IDs become nonsensical and sample numbers repetitious and non-monotonic (etc.), so the resulting .ti3 file seemed a tad incoherent to me. Hence my comment. Anyway, I did a test and colprof doesn't seem to care, so, as you say, it seems fine.
Nikolay Pokhilchenko
2013-12-16 09:29:11 UTC
Permalink
...
From my perspective, the point of the preconditioning profile is to define the space more accurately so the profiling engine does NOT oversample (or undersample). The goal should be to optimally sample the space so visual errors are evenly distributed. I believe that is what Argyll does.
That's why I see merely merging the two patch sets as non-optimum. Since they are unaware of each other the combined points will not be optimally placed in the device space. If Targen was aware of the existing data set, it could take those into account in placing the remaining 1000 data points to best evenly distribute the errors.
I'm asking for a targen "enlarge the target" function, which means adding new patches to existing data optimally. For example, say I have freshly read data for 500 patches. I'm restricted in how many patches I can print, but have discovered that the current 500 patches aren't enough to build a quality profile. I can print 500 more patches. But if the new 500-patch target is generated without taking the previous target into account, many of its patches will likely be placed in close proximity to the previous target's patches. The target wouldn't be optimal because of patch doubling (not exact doubling, but within a small area). So accounting for the previous target is needed: the patches from the previous target would be "fixed" during new target generation. I hope the current optimization algorithm permits this.
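The "fixed previous patches" idea can be sketched as greedy farthest-point sampling seeded with the existing patches. This toy Python illustration is not targen's actual full-spread implementation, just the general shape of the idea:

```python
import random

def min_dist(p, pts):
    """Euclidean distance from p to the nearest point in pts."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in pts)

def enlarge_target(existing, candidates, n_new):
    """Pick n_new points from `candidates`, keeping `existing` fixed:
    each pick is the candidate farthest from everything chosen so far,
    so new patches naturally avoid regions the old target already covers."""
    chosen = list(existing)
    added = []
    pool = list(candidates)
    for _ in range(n_new):
        best = max(pool, key=lambda p: min_dist(p, chosen))
        pool.remove(best)
        chosen.append(best)
        added.append(best)
    return added  # only the new patches; merge with the old set afterwards

random.seed(1)
existing = [tuple(random.random() for _ in range(3)) for _ in range(50)]
candidates = [tuple(random.random() for _ in range(3)) for _ in range(500)]
new_patches = enlarge_target(existing, candidates, 20)
```

Returning only the new patches matches the workflow suggested here: print and read just the new chart, then combine the two measurement sets afterwards.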
Graeme Gill
2013-12-18 04:44:39 UTC
Permalink
Post by Nikolay Pokhilchenko
I'm asking targen "enlarge the target" function which means to add new patches to
existing data optimally.
This is not straightforward. Either the existing measurements need to be carried through
the .ti2 and .ti3 files, entailing changes to many programs that deal with them,
or the existing patches need to be worked around and then deleted from the .ti1 before
it is saved.

If I had to do it, I'd favour the latter approach, but there are many more urgent
things to work on in Argyll.

Graeme Gill.
Nikolay Pokhilchenko
2013-12-18 07:20:21 UTC
Permalink
Post by Graeme Gill
This is not straightforward. Either the existing measurements need to be carried through
the .ti2 and .ti3 files, entailing changes to many programs that deal with them,
or the existing patches need to be worked around and then deleted from the .ti1 before
it is saved.
I've requested the second: just generate the .ti1 with the new patches only. Then this additional target can be combined with the initial one using the average utility.
If I had to do it, I'd favour the latter approach, but there are many more urgent
things to work on in Argyll.
Hope it will be realized, but it's not urgent. Thank you for the system!
Graeme Gill
2013-12-18 04:39:28 UTC
Permalink
Post by BC Rider
That's why I see merely merging the two patch sets as non-optimum. Since they are
unaware of each other the combined points will not be optimally placed in the device
space. If Targen was aware of the existing data set, it could take those into account
in placing the remaining 1000 data points to best evenly distribute the errors.
Maybe. A drawback to this idea is that the already measured points are fixed, so
new points have to work around them, rather than being able to optimise the position
of the whole set. (This drawback applies to any of the points created other than
the full spread ones.) This can lead to some less than ideal "gaps" between points.

An illustration using a 1D analogy: Say the first set of points is spaced at
a distance of 3 units, while an even distribution for the second set would
space points at 2 units. If you try and add points to the first set, you
can either add them in-between making the spacing 1.5 units, or not at all,
leaving it at 3.
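The 1D analogy in concrete numbers (a toy illustration only):

```python
# First set: 5 points on [0, 12] spaced 3 units apart.
first_set = [0.0, 3.0, 6.0, 9.0, 12.0]
# An even 7-point distribution over the same span would use 2-unit spacing,
# but the fixed points rule that out: adding one point per gap gives 1.5.
augmented = sorted(first_set + [1.5, 4.5, 7.5, 10.5])
spacings = [b - a for a, b in zip(augmented, augmented[1:])]
print(spacings)
```

The fixed points quantize the achievable spacings: you can hit 3, 1.5, 1, ... but never the ideal 2.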
Post by BC Rider
However, even if Targen is aware of the existing 500 data points it may not (in fact,
probably not) be able to achieve the same level as optimization as if it had the 1500
data points to freely place.
Exactly.
Post by BC Rider
For this reason, it seems to me there is a trade-off in the choice of making a
preconditioning profile or not. For a given number of patches (ie 1500) the benefit of
defining the device space for Targen must surpass the loss of those data points in the
making of final profile. It may not always be worth it. Or at least not worth
consuming many of the available patches. My suggestion to "carry forward" the
preconditioning data points in Targen to the final profile was an attempt to make this
trade-off less critical.
Measuring a preconditioning set is a bootstrap. On re-profiling, you'd simply
use the previous profile.
Post by BC Rider
Actually patch IDs become non-sensical, sample numbers repetitious and non-monotonic
(etc.) so the resulting .ti3 file seemed a tad incoherent to me. Hence my comment.
colprof doesn't care what they are called or what order they are in. All it cares about
is that they map device values to CIE values.

Graeme Gill.
Nikolay Pokhilchenko
2013-12-18 08:34:20 UTC
Permalink
Post by Graeme Gill
Post by BC Rider
If Targen was aware of the existing data set, it could take those into account
in placing the remaining 1000 data points to best evenly distribute the errors.
Maybe. A drawback to this idea is that the already measured points are fixed, so
new points have to work around them, rather than being able to optimise the position
of the whole set. (This drawback applies to any of the points created other than
the full spread ones.) This can lead to some less than ideal "gaps" between points.
An illustration using a 1D analogy: Say the first set of point is spaced at
a distance of 3 units, while an even distribution for the second set would
space points at 2 units. If you try and add points to the first set, you
can either add them in-between making the spacing 1.5 units, or not at all,
leaving it at 3.
The analogy is clear. But I often have a situation where a high degree of curvature is discovered in the first profiling run. The patch density in color space (not in device space) of the initial target may vary by a factor of 3 or more between different regions, for example between the darks and the whites. To continue the 1D analogy: I have a line with unevenly spaced points, 0.5 units apart at the start and 3 units at the end. There is no need to add points at the beginning if I want 1.5-unit spacing or better, because the start of the line already has more points than wanted. That's why I'm asking for this feature. At the cost of some possible gaps, it can eliminate excessive sampling of already well-covered regions and grant better sampling of poorly covered regions of the color space.
Michael Darling
2013-12-15 02:10:39 UTC
Permalink
Very interested in jumping in this discussion. Have done some testing on
this process, but want to do some more work on it.

I've tried successive profiles on a couple of media, with similar results:
* ICC-A made using 1800 patches (-v2 -d2 -G -e8 -B8 -f1800)
* ICC-B made using 1800 patches, using ICC-A as preliminary profile (-v2
-d2 -G -e8 -B8 -f1800 -c ICC-A.icm)
* ICC-Combined made using 3600 patches, combining the data used in ICC-A
and ICC-B

I then generated 500 (near) in gamut patches, to test "accuracy", using
each ICC profile as a preconditioned profile. (-v2 -d2 -G -f500 -c <<each
ICC>>)

Printed the same 500 patches using each ICC profile. (Granted, as the gamut of
each ICC profile is slightly different, there's a "problem" in only using
one of them as a precondition for generating them.)

targen->printtarg creates RAW RGB values to be sent to the printer. We can
get the LAB value predicted to be read, by using Photoshop to *assign* each
ICC profile. (Assigning doesn't change RGB values, just gives a way to
translate RGB to LAB.) Then, the program PatchTool can extract the LAB
values from the TIFF, and easily compare them to measurement files.

Then, reading the *one* set of 500 patches, and comparing the measured LAB
values against the predicted LAB values.

What I've found on the couple of media I've done this with, is that my
average deltaE goes down from ICC-A to -B, and from -B to -Combined, but
the worst deltaE goes up considerably. Example, average of ICC-A 0.81,
ICC-B 0.60, ICC-Combined 0.51. Example, worst of ICC-A 2.75, ICC-B 3.98,
ICC-Combined 4.22.
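For reference, once you have matched predicted and measured Lab values, the average and worst figures are a few lines of code. A minimal CIE76 sketch with made-up numbers (real evaluations often use CIEDE2000 instead):

```python
def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b*."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

# Hypothetical predicted vs measured L*a*b* values for a few patches.
predicted = [(50.0, 10.0, -5.0), (70.0, -3.0, 12.0), (30.0, 0.0, 0.0)]
measured  = [(50.5, 10.2, -5.1), (69.0, -2.0, 13.0), (30.0, 0.3, -0.4)]

errors = [delta_e76(p, m) for p, m in zip(predicted, measured)]
avg_de = sum(errors) / len(errors)
worst_de = max(errors)
print(f"average dE76 = {avg_de:.2f}, worst dE76 = {worst_de:.2f}")
# → average dE76 = 0.93, worst dE76 = 1.73
```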



Not knowing exactly how ICC profiles work internally, it would seem to me
like if you made 256*256*256 test patches (~ 16.7 million, talking about
150 rolls of 44" x 40') and argyll could crunch that, you should wind up
with "dead on" predictions.

Therefore, it would seem to me like as you create a feedback loop, using a
profile to precondition and wrapping both sets into the next profile, you
should keep getting closer to perfect. I would hope the process would
"seek" out the bad areas, so you wouldn't have to go through an obscene
amount of iterations to get a good way there.

Perhaps I experienced an odd set of events. Perhaps a sample size of more
than 500 LAB values for accuracy would show different results. Perhaps
more patches make overall accuracy increase, at the expense of a few going
bad, and I'm not understanding how the internals of ICC profiles work.
Perhaps I'd need to not ignore the slight gamut differences of the
profiles, and generate a test set of 500 patches for each. (Just seems to
me like it's better to have a shared set of 500 patches, for ICC profiles
of the SAME media.)
Post by BC Rider
Hi,
If I make a prelim profile with a 500 patch target and then a final
profile using 1000 patch target, I have printed 1500 patches that I'd like
colprof to use. However, I understand colprof only uses the final 1000
patch target. In that sense the patches used to make a preliminary
profile are "wasted" (for lack of a better term). Is this understanding
correct?
In my simple brain, if I tell Targen to generate a 1000 patch target while
giving it a preliminary profile that was made with 500 patch target, then
Targen should assume the original 500 data points in its calculations and
generate 1000 more unique data points. Then Targen should carry forward
the original 500 data points so that chartread can combine them into an
output with 1500 unique data points (as input to colprof).
Basically I'm trying to wring every possible value out of the 1500 data
points printed and I'm not sure Argyll currently does this. Of course it
is entirely possible I've got it all wrong and not making any sense. If so
can someone please straighten me out! Thanks!
Graeme Gill
2013-12-18 04:17:35 UTC
Permalink
Post by Michael Darling
What I've found on the couple of medias I've done this with, is that my
average deltaE goes down from ICC-A to -B, and from -B to -Combined, but
the worst deltaE goes up considerably. Example, average of ICC-A 0.81,
ICC-B 0.60, ICC-Combined 0.51. Example, worst of ICC-A 2.75, ICC-B 3.98,
ICC-Combined 4.22.
Hard to know why without a detailed analysis and lots of testing, but
one possibility is that the device (or conceivably the instrument) has
drifted between the runs. That's one of the dangers of a patchwork
characterisation, or of doing verifications. It's great if your device
has perfectly reproducible behaviour, but it can lead to a less cohesive
profile if it drifts. Another possibility is that the change in effective
point weighting has made the profile conform more closely in some areas,
at the expense of others. Another possibility is that some of the verification
points are slightly out of gamut.
Post by Michael Darling
Not knowing exactly how ICC profiles work internally, it would seem to me
like if you made 256*256*256 test patches (~ 16.7 million, talking about
150 rolls of 44" x 40') and argyll could crunch that, you should wind up
with "dead on" predictions.
Device and instrument inconsistency work against this, as does any
drift within the measurement run. A random patch distribution helps
turn systematic drift errors into random errors, but it is an additional
source of error which will be less evident in shorter test runs.
Post by Michael Darling
Therefore, it would seem to me like as you create a feedback loop, using a
profile to precondition and wrapping bots sets into the next profile, you
should keep getting closer to perfect. I would hope the process would
"seek" out the bad areas, so you wouldn't have to go through an obscene
amount of iterations to get a good way there.
See <http://www.argyllcms.com/doc/refine.html> for another approach to this.
Ultimately it's limited by repeatability and the finite resolution with which a device
space can be measured.

Graeme Gill.
Graeme Gill
2013-12-18 03:48:31 UTC
Permalink
Post by carlo rondinelli
targen -v -d2 -G -B64 -N1 -f1658 Pro_9500_II_xyz
Unless you _know_ that it is helping you, don't use -N1, as the default
emphasis is already more than can be justified by the visual importance
of the neutral axis compared to other parts of the gamut.

Using so many black patches (-B64) has dangers. You might be distorting
the profile with such a heavy weighting on the black point, thereby reducing
its accuracy in the near black region.

Graeme Gill.
carlo rondinelli
2013-12-20 17:33:23 UTC
Permalink
Thank you all. So if I start from a command line like the one below, would
that be recommended?

targen -v -d2 -G -B16 -N1 -f1658 Pro_9500_II_xyz

And then optimize later?

I'm not very experienced, and I do not know how to optimize an ICC profile
starting from an existing profile. Where can I find information or a description of
the procedure for dummies? :-)

Sorry and Thank you.
--
Carlo Rondinelli
Still-life, Fotografia Immersiva e Object per Virtual Tour
mob: + 39 389 9757042
skype: carlopano360
flickr.com/photos/carlorondinelli
alessiasalera.com

Nikolay Pokhilchenko
2013-12-20 21:11:42 UTC
Permalink
Post by carlo rondinelli
Thank you all, so if I start from a command line, like the one below, it
would be recommended?
targen -v -d2 -G -B16 -N1 -f1658 Pro_9500_II_xyz
I recommend neither -B16 nor -N1, but a previous profile. ArgyllCMS targen is good enough to do its job without those directions. If you have a profile for the needed media and printing mode, it's worth providing that current profile to targen.

targen -v -d2 -G -f1658 -c"Already_have_similar.icc" Pro_9500_II_xyz

IMO.
