Discussion:
nVidia Optimus External Monitor Calibration and Profiling
Leonardo Facchin
2014-10-07 10:18:55 UTC
Hello,

I'm trying to calibrate and profile a Dell U2410 external display
connected through an HDMI port, in extended desktop configuration, to my
Asus A56CB-XX175H laptop running MS Windows 8.1 64-bit.
The laptop graphics system makes use of the nVidia Optimus technology.
Basically, the laptop has an integrated Intel HD Graphics 4000 unit (IGP)
and a discrete nVidia GT 740M GPU.
According to the Optimus Technology White Paper, the Discrete GPU is NOT
connected to any kind of display output. The default behaviour is for
the GPU to be powered down into an idle state and for the IGP to handle
the graphics. When the Operating System together with the nVidia drivers
detects a graphics workload that could benefit from the faster Dedicated
GPU, the nVidia card is woken up and handles all the processing. Once
the frame has been calculated, the data are copied to the Integrated
Processing Unit buffer and the IGP acts as a video display controller
and sends the output to the monitor.

The default behaviour is for the system to act transparently, without
the user ever knowing which card is handling the workload at any given
time, but the user can override the default behaviour by forcing an
association between any executable and the preferred graphics card.
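(As an aside, from what I've read NVIDIA also documents an
application-side way to ask for the discrete GPU: the executable can
export a variable called NvOptimusEnablement. I haven't tried it; a
minimal C sketch of what I understand that to look like, just for
completeness:)

  /* Sketch only - NVIDIA's documented application-side Optimus hint.
   * Exporting this symbol from an .exe tells the Optimus driver to
   * prefer the discrete GPU for this program; the result is still
   * displayed through the Intel IGP. */
  #include <windows.h>

  __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; /* 1 = prefer discrete GPU */

  int main(void)
  {
      /* normal application code goes here */
      return 0;
  }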

My intention was to force Photoshop and Bridge to make use of the nVidia
GPU, since it looks like Adobe implemented a few features that actually
benefit from the Discrete GPU computing power.

The problem is that right now I'm totally lost and confused about how
the "two" graphics cards actually interact with the calibration and
profiling process. Let's assume that I could force both the relevant
Adobe applications to run with the nVidia GPU (which I did) and the
Argyll CMS components to do the same (which I didn't: even if I
associate the Argyll executables with the nVidia GPU, when I run them
the GPU remains idle, at least according to the nVidia Optimus GPU
State Viewer, a small utility meant to show when the GPU is powered up
and running).
Still, how can I know how many LUTs the system can access and which
graphics card controls each of them? Could the two cards be sharing the
LUTs? Does that even make sense? Sorry if the questions look stupid, but
I really have no idea how a graphics card works in detail, and all the
sources I could access are either way too generic to be useful or too
detailed and technical for me to parse the information.

When I access the Windows Color Management Control Panel the two
displays are listed as follows:
1. Generic PnP Monitor - Intel(R) HD Graphics 4000
2. Dell U2410(HDMI) - Intel(R) HD Graphics 4000
which seems consistent with the White Paper's statement that the nVidia
card is not connected directly to any video output, but only to the
Intel IGP.

I know that the easiest solution would probably be to stick to the
Integrated Processing Unit and be done with it, but on the one hand the
total confusion is undermining my trust in the accuracy of my
color-managed workflow, and on the other I have developed a curiosity
to understand a little better how this system works.

Any help, including pointing me to relevant documentation, is very
welcome. Thanks in advance,

Leonardo Facchin
--
My photo gallery on Flickr
<http://www.flickr.com/photos/leonardo_facchin/>
Alan Goldhammer
2014-10-07 12:11:30 UTC
This sounds like a hybrid system where the nVidia GPU is used to do
calculations on an as-needed basis but the Intel HD graphics is
responsible for the video output. IF this is the case you should be able
to profile the display conventionally and not have any further worries,
as the IGP handles the colors and not the nVidia GPU. I don't know how
much RAM you have on this laptop, as a simple Google search only showed
that base units have 4GB RAM. Depending on what type of work you do with
Photoshop that is really on the short side, as the minimum recommendation
for desktop PCs is 16GB, so you could hit bottlenecks because of RAM and
not your GPU setup.



Alan



Leonardo Facchin
2014-10-08 07:58:09 UTC
Hello and thanks for the quick replies and the suggestions!

@Alan
What you are suggesting about this being a hybrid system, and the
consequences thereof, makes sense. It's actually one of the possible
scenarios I envisioned. My problem is that I would like to confirm that
hypothesis but I can't find enough information to do so. Maybe I should
just email the manufacturer and ask. That should have been my first
move, actually.

About PS and the minimum required RAM. You are right: the laptop comes
with 4GB RAM. I added a second memory module, bringing the total to 8GB,
the maximum my system can handle, as far as I know. A 16GB desktop
system as a requirement for Photoshop to run efficiently looks too high
in my opinion; the Adobe website gives much lower figures. I think it
depends on what kind of files you work with. I mostly use it to
post-process my 12-megapixel camera pictures, and in my experience 8GB
is enough to work on a few files at the same time without the memory
filling up and the OS having to swap data to the hard drive. It also
depends on the extensiveness of the editing and the number of layers
used.

@Pictus
Thanks for the link to your tutorial! Like I said in my first e-mail, my
problem is first and foremost a theoretical one: understanding how this
specific hardware configuration works, so that I can predict how it
"reacts" to calibration and profiling and make sure that it behaves the
way I expect it to.
Anyway, I'm very curious about your experience with the AdobeRGB preset
of the U2410. I have always calibrated the monitor in Standard mode,
starting from the factory default settings, because that's what most
sources suggested as the best fit. I'll try your method and compare the
results. :)

Best Regards,
Leonardo
--
My photo gallery on Flickr
<http://www.flickr.com/photos/leonardo_facchin/>
Kai-Uwe Behrmann
2014-10-08 08:21:08 UTC
Post by Alan Goldhammer
This sounds like a hybrid system where the nVidia GPU is used to do
calculations on an as needed basis but the Intel HD graphics is
responsible for the video output. IF this is the case you should be
able to conventionally profile the display and not have any further
worries as the IGP handles the colors and not the nVidia GPU.
agreed

Just some notes about the details. Calibration is an on-the-fly
correction inside the output graphics card. In your case that will be
handled by the IGP. Whether a second GPU delivers some frames to the
first one does not matter at all. Colour conversions inside shaders
should be handled by both graphics cards equally well - only the
processing time might differ.
kind regards
Kai-Uwe
Leonardo Facchin
2014-10-09 12:15:18 UTC
Kai-Uwe, thanks for the reply and the explanation.
It's reassuring to know that I was overthinking the whole issue and that
the solution was much simpler than I thought at first.

Thanks everybody for helping out, thanks Graeme for an awesome,
well-documented piece of software, and Florian for writing a nice GUI
that is very helpful to people who are just taking their first steps
with the calibration process.

Best Regards,
Leonardo Facchin
--
My photo gallery on Flickr
<http://www.flickr.com/photos/leonardo_facchin/>
Pictus
2014-10-07 18:55:00 UTC
Hello Leonardo,
Post by Leonardo Facchin
I'm trying to calibrate and profile a Dell U2410 external display
connected through an HDMI port, in extended desktop configuration, 
to my Asus A56CB-XX175H laptop running MS Windows 8.1 64 bits.
I can only help with the Dell U2410; see http://www.dpreview.com/forums/post/53900941
--
Best regards,
Pictus mailto:pictus171-***@public.gmane.org
Graeme Gill
2014-10-13 05:54:02 UTC
Leonardo Facchin wrote:

Hi,
Post by Leonardo Facchin
The laptop graphics system makes use of the nVidia Optimus technology.
Beware - it's been reported that igfxpers.exe, which gets installed with
nVidia "Optimus" technology, interferes with Video LUT loading. I'm told
that you may have to disable both the igfx tray module
(c:\windows\system32\igfxtray.exe) and the igfxpph module
(c:\windows\system32\igfxpph.dll) in addition to the persistence module
(c:\windows\system32\igfxpers.exe). I have no idea what effect that has
on the system operation though.

If nVidia have fixed these problems it would be good to know, so that I
can update the documentation.
Post by Leonardo Facchin
The problem is that right now I'm totally lost and confused about how the "two" graphics
cards actually interact with the calibration and profiling process. Let's assume that I
could force both the relevant Adobe softwares to run with the nVidia GPU (which I did) and
the Argyll CMS components to do the same (which I didn't: even if I associate the Argyll
executables to the nVidia GPU, when I run them the GPU remains idle, at least according to
the nVidia Optimus GPU State Viewer, a little software meant to show when the GPU is
powered up and running).
ArgyllCMS doesn't use the GPU, so I wouldn't expect an association to do anything.
Post by Leonardo Facchin
Still, how can I know how many LUTs the system can access and which graphics card control
each of them? Could the two cards be sharing the LUTs? Does it even make sense?
The Video LUT is logically associated with a video output port, so if
it's been implemented correctly, there should be only a single set of
LUTs, probably in the Intel hardware if it's driving the HDMI port, but
in any case there is only a single operating system API per display.
Now, they may have messed all that up (hence the feedback that
igfxpers.exe is a problem).
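
One way to see what the operating system actually exposes is to walk
the attached display devices and read each one's ramp. A rough C sketch,
illustration only, using the Win32 GDI calls (the numbering won't
necessarily match dispwin's display numbering):

  /* Illustration only: list the display outputs the OS knows about and
   * read the video LUT ("gamma ramp") associated with each one. There
   * is one ramp per output, no matter which GPU rendered the frames
   * shown on it. Link against user32 and gdi32. */
  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      DISPLAY_DEVICEA dd = { .cb = sizeof(dd) };

      for (DWORD i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); i++) {
          if (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) {
              /* A DC opened on the device name addresses that output's LUT. */
              HDC hdc = CreateDCA(NULL, dd.DeviceName, NULL, NULL);
              if (hdc != NULL) {
                  WORD ramp[3][256];
                  if (GetDeviceGammaRamp(hdc, ramp))
                      printf("%s (%s): red LUT runs %u .. %u\n",
                             dd.DeviceName, dd.DeviceString,
                             (unsigned)ramp[0][0], (unsigned)ramp[0][255]);
                  else
                      printf("%s: no readable gamma ramp\n", dd.DeviceName);
                  DeleteDC(hdc);
              }
          }
          dd.cb = sizeof(dd);   /* reset before the next enumeration call */
      }
      return 0;
  }

On a machine like yours I would expect every attached output to show up
under the Intel adapter, matching what the Color Management panel lists.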
Post by Leonardo Facchin
I know that the easiest solutions would probably be to stick to the Integrated Processing
Unit and be done with it, but on the one hand the total confusion is undermining my trust
in the accuracy of my color managed workflow and on the other I just developed the
curiosity to understand a little better how this system works.
It's usually not that hard to figure out what's going on. Use dispwin -c to
clear the VideoLUTs, and dispwin strange.cal to set a noticeable LUT. That
will tell you whether calibration loading is working as expected.
<http://www.argyllcms.com/doc/dispwin.html>
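
Concretely, assuming the Dell is the second display in dispwin's
numbering (adjust the -d value and the path to the supplied strange.cal
to suit your setup), that would be something like:

  dispwin -d 2 -c
  dispwin -d 2 strange.cal

If the Dell's appearance changes visibly after the second command and
stays that way, calibration loading to that output is working; running
the first command again returns it to a linear ramp.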

Graeme Gill.
Leonardo Facchin
2014-10-13 13:03:40 UTC
Hi,
yes, I knew about the issue you are warning me about from reading
Argyll's documentation, so I recently disabled all the processes you
mentioned, just to be on the safe side. I don't know if nVidia solved
the problem or not. Before disabling the processes I never noticed the
OS flushing my LUT clean, but since I only use the external monitor for
sporadic photographic work I never really tested the issue thoroughly.
I will try to run some tests as soon as I have a chance.

Thanks again,
Leonardo Facchin
--
My photo gallery on Flickr
<http://www.flickr.com/photos/leonardo_facchin/>